Jan 27 07:16:25 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 07:16:25 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:16:26 crc restorecon[4690]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 
crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:16:26 crc restorecon[4690]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 
crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc 
restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:16:27 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 07:16:28 crc kubenswrapper[4764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 07:16:28 crc kubenswrapper[4764]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 07:16:28 crc kubenswrapper[4764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 07:16:28 crc kubenswrapper[4764]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 07:16:28 crc kubenswrapper[4764]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 07:16:28 crc kubenswrapper[4764]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.175972 4764 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.183910 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.183951 4764 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.183961 4764 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.183971 4764 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.183982 4764 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.183993 4764 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184002 4764 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184012 4764 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184021 4764 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184029 4764 
feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184037 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184046 4764 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184054 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184062 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184070 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184078 4764 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184086 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184098 4764 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184108 4764 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184118 4764 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184142 4764 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184153 4764 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184162 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184176 4764 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184187 4764 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184197 4764 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184207 4764 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184218 4764 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184228 4764 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184238 4764 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184249 4764 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184259 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184270 4764 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184282 4764 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184291 4764 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184300 4764 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184310 4764 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184319 4764 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184328 4764 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184337 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184345 4764 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184355 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184363 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184371 4764 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184380 4764 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184388 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184395 4764 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184405 4764 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184415 4764 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184426 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184434 4764 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184476 4764 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184485 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184493 4764 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184502 4764 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184512 4764 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184520 4764 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184528 4764 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184536 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184544 4764 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184553 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184561 4764 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184568 4764 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184576 4764 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184584 4764 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184591 4764 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184603 4764 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184612 4764 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184621 4764 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184630 4764 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.184641 4764 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.184880 4764 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.184901 4764 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.184922 4764 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.184938 4764 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.184960 4764 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.184973 4764 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.184989 4764 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185003 4764 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185015 4764 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185029 4764 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185042 4764 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185054 4764 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185066 4764 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185077 4764 flags.go:64] FLAG: --cgroup-root=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185086 4764 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185096 4764 flags.go:64] FLAG: --client-ca-file=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185108 4764 flags.go:64] FLAG: --cloud-config=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185117 4764 flags.go:64] FLAG: --cloud-provider=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185126 4764 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185139 4764 flags.go:64] FLAG: --cluster-domain=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185148 4764 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185158 4764 flags.go:64] FLAG: --config-dir=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185167 4764 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185176 4764 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185219 4764 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185230 4764 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185240 4764 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185250 4764 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185281 4764 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185290 4764 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185301 4764 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185311 4764 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185320 4764 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185332 4764 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185342 4764 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185353 4764 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185362 4764 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185372 4764 flags.go:64] FLAG: --enable-server="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185382 4764 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185397 4764 flags.go:64] FLAG: --event-burst="100"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185410 4764 flags.go:64] FLAG: --event-qps="50"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185420 4764 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185431 4764 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185442 4764 flags.go:64] FLAG: --eviction-hard=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185482 4764 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185492 4764 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185501 4764 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185511 4764 flags.go:64] FLAG: --eviction-soft=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185523 4764 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185533 4764 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185542 4764 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185552 4764 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185561 4764 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185571 4764 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185581 4764 flags.go:64] FLAG: --feature-gates=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185594 4764 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185604 4764 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185613 4764 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185622 4764 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185632 4764 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185642 4764 flags.go:64] FLAG: --help="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185651 4764 flags.go:64] FLAG: --hostname-override=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185660 4764 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185669 4764 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185678 4764 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185687 4764 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185695 4764 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185705 4764 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185714 4764 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185723 4764 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185731 4764 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185741 4764 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185751 4764 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185760 4764 flags.go:64] FLAG: --kube-reserved=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185770 4764 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185778 4764 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185808 4764 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185818 4764 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185827 4764 flags.go:64] FLAG: --lock-file=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185836 4764 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185846 4764 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185855 4764 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185880 4764 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185889 4764 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185898 4764 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185908 4764 flags.go:64] FLAG: --logging-format="text"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185917 4764 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185926 4764 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185935 4764 flags.go:64] FLAG: --manifest-url=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185944 4764 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185957 4764 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185966 4764 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185978 4764 flags.go:64] FLAG: --max-pods="110"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185989 4764 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.185999 4764 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186008 4764 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186018 4764 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186027 4764 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186037 4764 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186046 4764 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186075 4764 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186087 4764 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186099 4764 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186111 4764 flags.go:64] FLAG: --pod-cidr=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186122 4764 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186143 4764 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186153 4764 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186165 4764 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186176 4764 flags.go:64] FLAG: --port="10250"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186187 4764 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186198 4764 flags.go:64] FLAG: --provider-id=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186209 4764 flags.go:64] FLAG: --qos-reserved=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186223 4764 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186236 4764 flags.go:64] FLAG: --register-node="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186248 4764 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186259 4764 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186280 4764 flags.go:64] FLAG: --registry-burst="10"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186291 4764 flags.go:64] FLAG: --registry-qps="5"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186303 4764 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186314 4764 flags.go:64] FLAG: --reserved-memory=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186329 4764 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186342 4764 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186355 4764 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186367 4764 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186379 4764 flags.go:64] FLAG: --runonce="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186390 4764 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186402 4764 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186412 4764 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186421 4764 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186432 4764 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186475 4764 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186485 4764 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186495 4764 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186503 4764 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186514 4764 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186523 4764 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186532 4764 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186541 4764 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186551 4764 flags.go:64] FLAG: --system-cgroups=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186560 4764 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186577 4764 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186588 4764 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186598 4764 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186609 4764 flags.go:64] FLAG: --tls-min-version=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186620 4764 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186633 4764 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186642 4764 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186651 4764 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186662 4764 flags.go:64] FLAG: --v="2"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186675 4764 flags.go:64] FLAG: --version="false"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186688 4764 flags.go:64] FLAG: --vmodule=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186699 4764 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.186709 4764 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186924 4764 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186935 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186944 4764 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186953 4764 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186963 4764 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186973 4764 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186981 4764 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186990 4764 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.186998 4764 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187007 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187016 4764 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187025 4764 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187034 4764 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187042 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187051 4764 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187059 4764 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187067 4764 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187075 4764 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187083 4764 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187091 4764 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187099 4764 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187107 4764 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187115 4764 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187124 4764 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187133 4764 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187141 4764 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187149 4764 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187157 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187165 4764 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187173 4764 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187181 4764 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187191 4764 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187198 4764 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187206 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187216 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187226 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187239 4764 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187252 4764 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187265 4764 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187276 4764 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187287 4764 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187298 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187310 4764 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187323 4764 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187334 4764 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187344 4764 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187355 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187365 4764 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187378 4764 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187388 4764 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187398 4764 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187407 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187416 4764 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187426 4764 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187443 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187495 4764 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187506 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187514 4764 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187525 4764 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187535 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187545 4764 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187553 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187561 4764 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187568 4764 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187577 4764 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187586 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187595 4764 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187617 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187627 4764 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187635 4764 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.187643 4764 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.187669 4764 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.202341 4764 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.202390 4764 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202537 4764 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202552 4764 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202563 4764 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202575 4764 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202584 4764 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202592 4764 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202600 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202608 4764 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202617 4764 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202625 4764 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202634 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202642 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202650 4764 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202659 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202667 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202676 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202685 4764 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202694 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202703 4764 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202712 4764 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202721 4764 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202728 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202736 4764 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202743 4764 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202751 4764 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 07:16:28 crc 
kubenswrapper[4764]: W0127 07:16:28.202759 4764 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202767 4764 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202774 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.202782 4764 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203358 4764 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203370 4764 feature_gate.go:330] unrecognized feature gate: Example Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203380 4764 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203390 4764 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203399 4764 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203408 4764 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203418 4764 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203427 4764 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203441 4764 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203473 4764 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 
07:16:28.203483 4764 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203490 4764 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203498 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203506 4764 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203516 4764 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203528 4764 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203538 4764 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203549 4764 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203558 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203570 4764 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203578 4764 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203587 4764 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203595 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203603 4764 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203611 4764 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203619 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203627 4764 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203635 4764 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203643 4764 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203651 4764 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203660 4764 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203670 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203679 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203689 4764 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203698 4764 feature_gate.go:330] 
unrecognized feature gate: SignatureStores Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203706 4764 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203716 4764 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203724 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203732 4764 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203739 4764 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203747 4764 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.203754 4764 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.203768 4764 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204050 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204063 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204071 4764 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 07:16:28 
crc kubenswrapper[4764]: W0127 07:16:28.204082 4764 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204093 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204103 4764 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204111 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204123 4764 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204131 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204139 4764 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204150 4764 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204159 4764 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204170 4764 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204179 4764 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204188 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204196 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204204 4764 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204212 4764 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204220 4764 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204229 4764 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204237 4764 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204246 4764 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204254 4764 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204262 4764 feature_gate.go:330] unrecognized feature gate: Example Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204271 4764 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204278 4764 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204286 4764 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204294 4764 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204302 4764 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204311 4764 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204319 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204327 4764 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204334 4764 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204342 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204350 4764 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204358 4764 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204366 4764 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204374 4764 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204382 4764 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204391 4764 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204400 4764 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204407 4764 feature_gate.go:330] unrecognized feature gate: 
NetworkDiagnosticsConfig Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204418 4764 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204426 4764 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204434 4764 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204473 4764 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204481 4764 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204489 4764 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204497 4764 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204504 4764 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204512 4764 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204520 4764 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204528 4764 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204535 4764 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204543 4764 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204552 4764 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204560 4764 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204567 4764 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204575 4764 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204583 4764 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204591 4764 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204599 4764 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204607 4764 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204617 4764 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204628 4764 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204638 4764 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204648 4764 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204657 4764 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204665 4764 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204673 4764 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.204682 4764 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.204694 4764 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.204946 4764 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.211408 4764 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.211572 4764 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.213704 4764 server.go:997] "Starting client certificate rotation" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.213776 4764 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.214064 4764 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-15 07:07:41.130345931 +0000 UTC Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.214330 4764 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.240083 4764 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.242401 4764 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.245791 4764 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.264902 4764 log.go:25] "Validated CRI v1 runtime API" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.309119 4764 log.go:25] "Validated CRI v1 image API" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.311672 4764 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.317600 4764 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-07-11-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.317647 4764 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.341099 4764 manager.go:217] Machine: {Timestamp:2026-01-27 07:16:28.337143324 +0000 UTC m=+0.932765900 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8df70a0f-a43f-46e8-bc96-59789f0d9a1b BootID:f93d49ef-43cd-4375-924e-313a995dd43d Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c8:a6:1b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c8:a6:1b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:45:77:2c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ec:44:92 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:98:d9:95 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4e:06:a5 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ca:32:18:2c:77:6c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:33:db:f7:e7:2b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.341394 4764 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.341643 4764 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.343648 4764 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.344007 4764 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.344081 4764 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.344424 4764 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.344476 4764 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.344984 4764 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.345035 4764 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.345784 4764 state_mem.go:36] "Initialized new in-memory state store" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.345961 4764 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.351055 4764 kubelet.go:418] "Attempting to sync node with API server" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.351093 4764 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.351120 4764 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.351141 4764 kubelet.go:324] "Adding apiserver pod source" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.351163 4764 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.355940 4764 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.356855 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.356940 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.356970 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.357097 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.357242 4764 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.360153 4764 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.362850 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.362902 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.362935 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.362957 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.362987 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.363004 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.363020 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.363047 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.363067 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.363088 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.363112 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.363130 4764 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.364128 4764 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.365077 4764 server.go:1280] "Started kubelet" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.366365 4764 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:28 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.367729 4764 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.367704 4764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.369279 4764 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.374288 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.374322 4764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.374429 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:46:16.84463001 +0000 UTC Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.374677 4764 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.374705 4764 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.374829 4764 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 
07:16:28.374993 4764 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.375701 4764 server.go:460] "Adding debug handlers to kubelet server" Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.375823 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.376891 4764 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.376989 4764 factory.go:55] Registering systemd factory Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.377002 4764 factory.go:221] Registration of the systemd container factory successfully Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.382137 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.382255 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.382372 4764 factory.go:153] Registering CRI-O factory Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 
07:16:28.382390 4764 factory.go:221] Registration of the crio container factory successfully Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.382415 4764 factory.go:103] Registering Raw factory Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.380917 4764 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e853a2431bd25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:16:28.365020453 +0000 UTC m=+0.960643019,LastTimestamp:2026-01-27 07:16:28.365020453 +0000 UTC m=+0.960643019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.382429 4764 manager.go:1196] Started watching for new ooms in manager Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.384893 4764 manager.go:319] Starting recovery of all containers Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.402761 4764 manager.go:324] Recovery completed Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.405550 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.405652 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.405685 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409678 4764 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409724 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409739 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409750 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409762 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409774 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409786 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409811 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409822 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409835 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409847 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409861 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409871 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409883 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409893 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409905 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409915 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409926 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409936 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409948 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409958 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409969 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409978 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" 
seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.409988 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410001 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410011 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410024 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410034 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410045 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 
07:16:28.410055 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410065 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410075 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410086 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410098 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410108 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410118 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410128 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410138 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410148 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410159 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410169 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410180 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410191 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410201 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410210 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410221 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410230 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410240 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410251 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410262 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410277 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410288 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410300 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410310 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410320 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410330 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410340 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410351 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410360 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410371 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410381 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410392 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410402 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410413 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410422 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410444 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410471 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410484 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410493 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410502 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410514 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410524 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" 
seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410539 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410550 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410561 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410572 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410581 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410591 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 
07:16:28.410601 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410610 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410620 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410631 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410641 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410653 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410663 4764 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410675 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410685 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410695 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410707 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410718 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410729 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410740 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410768 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410779 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410794 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410805 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410816 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410827 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410838 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410848 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410864 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410874 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410892 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 07:16:28 
crc kubenswrapper[4764]: I0127 07:16:28.410903 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410918 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410932 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410946 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410960 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410974 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410985 4764 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.410997 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411009 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411019 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411030 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411039 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411049 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411059 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411078 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411088 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411098 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411108 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411119 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411129 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411153 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411164 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411173 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411185 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411195 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" 
seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411204 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411214 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411225 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411263 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411273 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411282 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411292 4764 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411303 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411313 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411325 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411335 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411345 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411355 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411366 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411378 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411388 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411398 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411409 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411419 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411428 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411443 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411472 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411483 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411493 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411503 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 
07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411513 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411523 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411533 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411543 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411554 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411570 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411580 4764 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411591 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411600 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411610 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411621 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411630 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411640 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411650 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411659 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411669 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411679 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411689 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411699 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411710 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411720 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411731 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411741 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411751 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411761 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411771 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411780 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411789 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411799 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411809 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411819 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411829 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411839 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411850 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411859 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411869 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411878 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411889 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411899 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411910 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411919 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411929 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411940 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411951 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411960 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.411975 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412010 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412023 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412033 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412045 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412055 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412064 4764 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412073 4764 reconstruct.go:97] "Volume reconstruction finished"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.412080 4764 reconciler.go:26] "Reconciler: start to sync state"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.419570 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.421377 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.421415 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.421430 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.422433 4764 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.422480 4764 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.422587 4764 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.434836 4764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.435202 4764 policy_none.go:49] "None policy: Start"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.436498 4764 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.436545 4764 state_mem.go:35] "Initializing new in-memory state store"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.436974 4764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.437040 4764 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.437095 4764 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.437193 4764 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.439958 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.440086 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.474948 4764 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.506609 4764 manager.go:334] "Starting Device Plugin manager"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.506881 4764 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.506902 4764 server.go:79] "Starting device plugin registration server"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.507653 4764 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.507687 4764 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.507858 4764 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.507966 4764 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.507984 4764 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.517958 4764 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.538188 4764 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.538330 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.540263 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.540315 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.540327 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.540557 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.541052 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.541155 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542154 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542201 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542217 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542380 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542405 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542415 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542481 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542563 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.542607 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544344 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544354 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544376 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544389 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544391 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544649 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544770 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.544827 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.545915 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.545947 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.545963 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546114 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546300 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546353 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546714 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546745 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546756 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546842 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546867 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.546880 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.547107 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.547147 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.547496 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.547525 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.547535 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.547991 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.548009 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.548017 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.576852 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.608697 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.610372 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.610914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.610933 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.610970 4764 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.611719 4764 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.615943 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616058 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616086 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616111 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616141 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616183 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616206 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616225 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616256 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616276 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616296 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616316 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616336 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616354 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.616371 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.717932 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718000 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718019 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718042 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718061 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718078 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718108 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718126 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718145 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718168 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718188 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718196 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718263 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718285 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718351 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718366 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718193 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718404 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718485 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718274 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718340 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718355 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718510 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718222 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718667 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718694 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718720 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718829 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718876 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.718913 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.812791 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.815020 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.815075 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 
07:16:28.815093 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.815129 4764 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.816224 4764 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.870659 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.886489 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.905621 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.923885 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d114aee68add3202e002480fd5850d2bed3bfdc32d8da5bccc53f5908967f1ca WatchSource:0}: Error finding container d114aee68add3202e002480fd5850d2bed3bfdc32d8da5bccc53f5908967f1ca: Status 404 returned error can't find the container with id d114aee68add3202e002480fd5850d2bed3bfdc32d8da5bccc53f5908967f1ca Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.924269 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-aa18f47ea8aa2a89998a1effba5a37904bd8823199fb0837e08d13d507c75051 WatchSource:0}: Error finding container aa18f47ea8aa2a89998a1effba5a37904bd8823199fb0837e08d13d507c75051: Status 404 returned error can't find the container with id aa18f47ea8aa2a89998a1effba5a37904bd8823199fb0837e08d13d507c75051 Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.931122 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.931272 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d8afb16ec1d9eed708e4fbe8fcf9a8ac0a1aad1e34fed1c26eafd38bddb09dea WatchSource:0}: Error finding container d8afb16ec1d9eed708e4fbe8fcf9a8ac0a1aad1e34fed1c26eafd38bddb09dea: Status 404 returned error can't find the container with id d8afb16ec1d9eed708e4fbe8fcf9a8ac0a1aad1e34fed1c26eafd38bddb09dea Jan 27 07:16:28 crc kubenswrapper[4764]: I0127 07:16:28.938421 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.945439 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d27237a59302f43f90019e7c02707e708f1a88c61a3228e154d7d084ed4493c8 WatchSource:0}: Error finding container d27237a59302f43f90019e7c02707e708f1a88c61a3228e154d7d084ed4493c8: Status 404 returned error can't find the container with id d27237a59302f43f90019e7c02707e708f1a88c61a3228e154d7d084ed4493c8 Jan 27 07:16:28 crc kubenswrapper[4764]: W0127 07:16:28.966824 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-13b523adb894257aa14d6673bd8ea2f270da4fc2aeff8e32f0d343c1f7426c24 WatchSource:0}: Error finding container 13b523adb894257aa14d6673bd8ea2f270da4fc2aeff8e32f0d343c1f7426c24: Status 404 returned error can't find the container with id 13b523adb894257aa14d6673bd8ea2f270da4fc2aeff8e32f0d343c1f7426c24 Jan 27 07:16:28 crc kubenswrapper[4764]: E0127 07:16:28.978386 4764 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms" Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.216458 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.218315 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.218357 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.218370 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.218400 4764 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:16:29 crc kubenswrapper[4764]: E0127 07:16:29.218931 4764 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Jan 27 07:16:29 crc kubenswrapper[4764]: W0127 07:16:29.365145 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:29 crc kubenswrapper[4764]: E0127 07:16:29.365239 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection 
refused" logger="UnhandledError" Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.369505 4764 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.374921 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:47:16.456915366 +0000 UTC Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.442921 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa18f47ea8aa2a89998a1effba5a37904bd8823199fb0837e08d13d507c75051"} Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.446238 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d114aee68add3202e002480fd5850d2bed3bfdc32d8da5bccc53f5908967f1ca"} Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.447759 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"13b523adb894257aa14d6673bd8ea2f270da4fc2aeff8e32f0d343c1f7426c24"} Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.449305 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d27237a59302f43f90019e7c02707e708f1a88c61a3228e154d7d084ed4493c8"} Jan 27 07:16:29 crc kubenswrapper[4764]: I0127 07:16:29.450529 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d8afb16ec1d9eed708e4fbe8fcf9a8ac0a1aad1e34fed1c26eafd38bddb09dea"} Jan 27 07:16:29 crc kubenswrapper[4764]: E0127 07:16:29.530788 4764 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e853a2431bd25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:16:28.365020453 +0000 UTC m=+0.960643019,LastTimestamp:2026-01-27 07:16:28.365020453 +0000 UTC m=+0.960643019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 07:16:29 crc kubenswrapper[4764]: W0127 07:16:29.627203 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:29 crc kubenswrapper[4764]: E0127 07:16:29.627320 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:29 crc kubenswrapper[4764]: W0127 07:16:29.657655 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:29 crc kubenswrapper[4764]: E0127 07:16:29.657794 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:29 crc kubenswrapper[4764]: E0127 07:16:29.779415 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s" Jan 27 07:16:29 crc kubenswrapper[4764]: W0127 07:16:29.846284 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:29 crc kubenswrapper[4764]: E0127 07:16:29.846390 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.019738 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.021959 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:30 
crc kubenswrapper[4764]: I0127 07:16:30.022023 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.022038 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.022078 4764 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:16:30 crc kubenswrapper[4764]: E0127 07:16:30.022864 4764 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.250732 4764 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 07:16:30 crc kubenswrapper[4764]: E0127 07:16:30.251738 4764 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.369862 4764 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.375853 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:35:52.332688287 +0000 UTC Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.457051 4764 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9"} Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.457124 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7"} Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.457141 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a"} Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.459084 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661" exitCode=0 Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.459170 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661"} Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.459313 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.460692 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.460727 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:30 crc 
kubenswrapper[4764]: I0127 07:16:30.460744 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.462949 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.463698 4764 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="75c3fbb091d06c72427beaf78eb02f106f8bc8243dbcf79bed8de0efdf6f6f17" exitCode=0 Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.463736 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"75c3fbb091d06c72427beaf78eb02f106f8bc8243dbcf79bed8de0efdf6f6f17"} Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.463825 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.465070 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.465129 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.465141 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.470323 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.470351 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.470363 4764 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.472312 4764 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495" exitCode=0 Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.472391 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.472396 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495"} Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.473300 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.473321 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.473330 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.474766 4764 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c" exitCode=0 Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.474820 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c"} Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.474920 4764 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.476541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.476576 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:30 crc kubenswrapper[4764]: I0127 07:16:30.476590 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.368998 4764 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.376257 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:13:07.616585511 +0000 UTC Jan 27 07:16:31 crc kubenswrapper[4764]: E0127 07:16:31.380093 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.481527 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.481591 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.481608 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.481592 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.482613 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.482651 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.482662 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.486296 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.486569 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.490067 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.490133 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.490156 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.490709 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.490736 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.490746 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.493747 4764 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="19614eb55e5db76fee77bdcb30a3efca98da0974ea3a35b1741a9f147729716c" exitCode=0 Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.493792 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"19614eb55e5db76fee77bdcb30a3efca98da0974ea3a35b1741a9f147729716c"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.493922 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.495060 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.495100 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.495163 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.496469 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a"} Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.496552 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.498162 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.498190 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.498201 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.623311 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.625370 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.625429 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.625479 4764 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:31 crc kubenswrapper[4764]: I0127 07:16:31.625521 4764 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:16:31 crc kubenswrapper[4764]: E0127 07:16:31.626595 4764 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Jan 27 07:16:31 crc kubenswrapper[4764]: W0127 07:16:31.758967 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:31 crc kubenswrapper[4764]: E0127 07:16:31.759121 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:31 crc kubenswrapper[4764]: W0127 07:16:31.842837 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:31 crc kubenswrapper[4764]: E0127 07:16:31.842935 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:31 crc kubenswrapper[4764]: W0127 
07:16:31.907674 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:31 crc kubenswrapper[4764]: E0127 07:16:31.907785 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:32 crc kubenswrapper[4764]: W0127 07:16:32.103132 4764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:32 crc kubenswrapper[4764]: E0127 07:16:32.103563 4764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.369284 4764 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.376522 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:04:26.310980322 +0000 UTC Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 
07:16:32.502337 4764 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7b3e9660c7872c9a32d6da0cd73bd6f88a459d169aff5982abb180e4e1acf1f4" exitCode=0 Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.502385 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7b3e9660c7872c9a32d6da0cd73bd6f88a459d169aff5982abb180e4e1acf1f4"} Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.502466 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.503337 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.503366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.503377 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.506549 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8e50e74ea98162513cf25b1af73fc5ec8f2546b304e8318474e736b121fb6f55"} Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.506571 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46"} Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.506601 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 
07:16:32.506648 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.506649 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.506657 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.506714 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.507840 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.507870 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.507881 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.508324 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.508361 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.508371 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.508768 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.508801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 
07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.508814 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.511453 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.511483 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:32 crc kubenswrapper[4764]: I0127 07:16:32.511497 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.247031 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.377672 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:27:31.131907576 +0000 UTC Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.514505 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d91d8f3d3f5fdb82fe9c5366c970b42d538d71a763eda361b412ce4349cf2144"} Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.514554 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5d530f1c58454f3ee757a0bc58fa821204e378d643a7bb3dd01d122feaf02497"} Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.514568 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4db8a05b1bad2810fd291ace5f0166b7061681ff4757d1b89368e48fd293f28d"} Jan 
27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.514576 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"060deb63e32b34b747696d5c32a61c6cf252c55733c7c0f5a3922fe54982a781"} Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.514616 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.514708 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.515516 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.515550 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:33 crc kubenswrapper[4764]: I0127 07:16:33.515559 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.378558 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:56:51.593631937 +0000 UTC Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.494888 4764 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.528316 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"65e3975853f90b5a1f71fa29f8ea50579644ce15a2c72a0f9debdfdcf72f2387"} Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.528345 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.528504 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.529842 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.529922 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.529947 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.530203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.530263 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.530282 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.827429 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.828701 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.828733 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.828741 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:34 crc kubenswrapper[4764]: I0127 07:16:34.828762 4764 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Jan 27 07:16:35 crc kubenswrapper[4764]: I0127 07:16:35.379221 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:55:22.595367105 +0000 UTC Jan 27 07:16:35 crc kubenswrapper[4764]: I0127 07:16:35.530884 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:35 crc kubenswrapper[4764]: I0127 07:16:35.532105 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:35 crc kubenswrapper[4764]: I0127 07:16:35.532169 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:35 crc kubenswrapper[4764]: I0127 07:16:35.532181 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.379853 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:02:48.868908151 +0000 UTC Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.421504 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.421739 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.423190 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.423223 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.423234 4764 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.773427 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.773817 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.776004 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.776717 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:36 crc kubenswrapper[4764]: I0127 07:16:36.776785 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:37 crc kubenswrapper[4764]: I0127 07:16:37.282524 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:37 crc kubenswrapper[4764]: I0127 07:16:37.282854 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:37 crc kubenswrapper[4764]: I0127 07:16:37.284624 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:37 crc kubenswrapper[4764]: I0127 07:16:37.284678 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:37 crc kubenswrapper[4764]: I0127 07:16:37.284690 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:37 crc kubenswrapper[4764]: I0127 07:16:37.380903 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:39:52.981421261 +0000 UTC Jan 27 07:16:38 crc kubenswrapper[4764]: I0127 07:16:38.103034 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:38 crc kubenswrapper[4764]: I0127 07:16:38.103264 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:38 crc kubenswrapper[4764]: I0127 07:16:38.104844 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:38 crc kubenswrapper[4764]: I0127 07:16:38.104898 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:38 crc kubenswrapper[4764]: I0127 07:16:38.104918 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:38 crc kubenswrapper[4764]: I0127 07:16:38.381907 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:09:38.997316756 +0000 UTC Jan 27 07:16:38 crc kubenswrapper[4764]: E0127 07:16:38.518075 4764 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 07:16:39 crc kubenswrapper[4764]: I0127 07:16:39.267334 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 07:16:39 crc kubenswrapper[4764]: I0127 07:16:39.267845 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:39 crc kubenswrapper[4764]: I0127 07:16:39.269869 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:39 crc kubenswrapper[4764]: I0127 07:16:39.269928 4764 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:39 crc kubenswrapper[4764]: I0127 07:16:39.269951 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:39 crc kubenswrapper[4764]: I0127 07:16:39.382088 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:37:26.022289817 +0000 UTC Jan 27 07:16:40 crc kubenswrapper[4764]: I0127 07:16:40.382903 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:24:30.49746804 +0000 UTC Jan 27 07:16:40 crc kubenswrapper[4764]: I0127 07:16:40.926387 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:40 crc kubenswrapper[4764]: I0127 07:16:40.926603 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:40 crc kubenswrapper[4764]: I0127 07:16:40.928171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:40 crc kubenswrapper[4764]: I0127 07:16:40.928230 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:40 crc kubenswrapper[4764]: I0127 07:16:40.928266 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:40 crc kubenswrapper[4764]: I0127 07:16:40.931471 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:41 crc kubenswrapper[4764]: I0127 07:16:41.383292 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:28:18.771912703 +0000 UTC Jan 27 07:16:41 crc kubenswrapper[4764]: I0127 07:16:41.421230 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:41 crc kubenswrapper[4764]: I0127 07:16:41.552197 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:41 crc kubenswrapper[4764]: I0127 07:16:41.554387 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:41 crc kubenswrapper[4764]: I0127 07:16:41.554431 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:41 crc kubenswrapper[4764]: I0127 07:16:41.554464 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:41 crc kubenswrapper[4764]: I0127 07:16:41.560250 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.064197 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.064717 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.066850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.066906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.066919 4764 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.117221 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.383532 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:32:04.990731133 +0000 UTC Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.554206 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.554468 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.555454 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.555496 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.555509 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.556172 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.556253 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.556267 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:42 crc kubenswrapper[4764]: I0127 07:16:42.576335 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 
07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.369707 4764 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.383989 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:44:18.67708357 +0000 UTC Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.435484 4764 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.435567 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.444610 4764 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.444713 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.558836 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.560625 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e50e74ea98162513cf25b1af73fc5ec8f2546b304e8318474e736b121fb6f55" exitCode=255 Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.560704 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8e50e74ea98162513cf25b1af73fc5ec8f2546b304e8318474e736b121fb6f55"} Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.560804 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.560846 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.560868 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561800 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561836 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561851 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561859 4764 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561868 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561872 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561921 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.561935 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:43 crc kubenswrapper[4764]: I0127 07:16:43.562449 4764 scope.go:117] "RemoveContainer" containerID="8e50e74ea98162513cf25b1af73fc5ec8f2546b304e8318474e736b121fb6f55" Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.384766 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:24:33.074549999 +0000 UTC Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.422183 4764 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.422315 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.565552 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.568040 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e"} Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.568238 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.569501 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.569574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:44 crc kubenswrapper[4764]: I0127 07:16:44.569650 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:45 crc kubenswrapper[4764]: I0127 07:16:45.385690 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:32:38.546477427 +0000 UTC Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.386498 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:23:48.684120506 +0000 UTC Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 
07:16:46.429528 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.429692 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.429828 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.430774 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.430816 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.430832 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.436458 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.573553 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.574759 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.574811 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:46 crc kubenswrapper[4764]: I0127 07:16:46.574822 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:47 crc kubenswrapper[4764]: I0127 07:16:47.386783 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:21:15.735167417 +0000 UTC Jan 27 07:16:47 crc kubenswrapper[4764]: I0127 07:16:47.575694 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:47 crc kubenswrapper[4764]: I0127 07:16:47.576470 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:47 crc kubenswrapper[4764]: I0127 07:16:47.576509 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:47 crc kubenswrapper[4764]: I0127 07:16:47.576524 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.387734 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:40:04.98946309 +0000 UTC Jan 27 07:16:48 crc kubenswrapper[4764]: E0127 07:16:48.427601 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 07:16:48 crc kubenswrapper[4764]: E0127 07:16:48.432691 4764 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.433248 4764 trace.go:236] Trace[209372166]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:16:38.029) (total time: 10403ms): Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[209372166]: ---"Objects listed" error: 10403ms (07:16:48.433) Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[209372166]: 
[10.403655576s] [10.403655576s] END Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.433453 4764 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.434001 4764 trace.go:236] Trace[1476408321]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:16:35.761) (total time: 12672ms): Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[1476408321]: ---"Objects listed" error: 12672ms (07:16:48.433) Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[1476408321]: [12.672408427s] [12.672408427s] END Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.434025 4764 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.434205 4764 trace.go:236] Trace[1554890820]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:16:38.234) (total time: 10199ms): Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[1554890820]: ---"Objects listed" error: 10199ms (07:16:48.434) Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[1554890820]: [10.199371524s] [10.199371524s] END Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.434236 4764 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.435547 4764 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.435961 4764 trace.go:236] Trace[706780365]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:16:35.020) (total time: 13414ms): Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[706780365]: ---"Objects listed" error: 13414ms (07:16:48.435) Jan 27 07:16:48 crc kubenswrapper[4764]: Trace[706780365]: [13.41495s] [13.41495s] END Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 
07:16:48.436002 4764 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.442612 4764 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.459143 4764 csr.go:261] certificate signing request csr-mpsjl is approved, waiting to be issued Jan 27 07:16:48 crc kubenswrapper[4764]: I0127 07:16:48.471140 4764 csr.go:257] certificate signing request csr-mpsjl is issued Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.364316 4764 apiserver.go:52] "Watching apiserver" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.369031 4764 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.370305 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-4sbqw","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.371131 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.371163 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.371613 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.372100 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.372165 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.372494 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.372555 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.372580 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.372639 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.372800 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.374505 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.376126 4764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.376869 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.376873 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.377047 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.377155 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.377348 4764 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.377582 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.378074 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.378073 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.378673 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.378726 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.380655 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.387988 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:35:55.113102415 +0000 UTC Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.395128 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.407187 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.421872 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.435217 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.442949 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.442993 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443016 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443034 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443055 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443073 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443091 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:16:49 crc kubenswrapper[4764]: 
I0127 07:16:49.443110 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443131 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443151 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443173 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443192 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443211 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443231 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443254 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443273 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443291 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443315 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443338 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443356 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443375 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443394 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443411 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443429 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443463 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443479 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443499 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443519 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443540 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443573 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443592 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443617 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443635 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443652 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443720 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443748 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443820 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443845 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443899 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443938 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443899 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443959 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443978 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443997 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444013 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444029 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444045 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444071 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444087 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444164 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444182 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444202 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444223 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444241 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444266 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444289 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444355 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444375 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444391 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444408 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444428 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444463 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444482 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444500 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444521 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444540 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444557 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444574 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444594 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444613 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444632 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444648 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444668 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444713 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444729 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444833 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444851 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444870 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444888 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444909 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444927 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444947 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444965 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444983 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445004 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445030 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445056 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445075 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445091 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445108 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445133 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445160 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445175 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445194 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445216 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445240 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445264 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445286 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445305 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445331 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445359 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445383 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445403 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445430 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445473 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445494 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445518 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445547 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445572 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445604 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445628 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445652 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445673 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445696 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445713 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445770 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445790 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445807 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445826 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445844 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445864 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445881 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445900 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445919 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445936 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445953 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445970 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 
07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445988 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446004 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446021 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446036 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446053 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446071 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446090 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446111 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446128 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446147 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446165 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 
07:16:49.446181 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446198 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446226 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446244 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446263 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446281 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446301 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446321 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446340 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446358 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446375 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 
07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446393 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446414 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446432 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446470 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446489 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446522 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446538 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446557 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446575 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446592 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446610 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446628 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446647 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446667 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446694 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446714 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446733 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446753 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446771 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446787 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446803 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446820 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 
07:16:49.446837 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446853 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446872 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446890 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446906 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446922 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446944 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446962 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446980 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446998 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447019 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447038 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447059 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447081 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447107 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447147 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447172 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") 
pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447191 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447210 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447227 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447250 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447269 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447287 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447311 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447334 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447362 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447406 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447455 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgrr7\" (UniqueName: 
\"kubernetes.io/projected/2794da51-6825-4d02-8ed3-bc0ff88fb961-kube-api-access-pgrr7\") pod \"node-resolver-4sbqw\" (UID: \"2794da51-6825-4d02-8ed3-bc0ff88fb961\") " pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447482 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447510 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447532 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447551 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447573 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447603 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447621 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447638 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2794da51-6825-4d02-8ed3-bc0ff88fb961-hosts-file\") pod \"node-resolver-4sbqw\" (UID: \"2794da51-6825-4d02-8ed3-bc0ff88fb961\") " pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447661 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447681 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447698 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447715 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447735 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447754 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447795 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447808 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.449204 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.462236 4764 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.443972 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444183 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444385 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444577 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444622 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444711 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444778 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444826 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.444922 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445000 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445082 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445103 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445209 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445260 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445308 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445364 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445466 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465576 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445540 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445615 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445696 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445707 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465592 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445709 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445877 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446199 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446228 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446337 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446377 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446571 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446613 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446636 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446860 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446929 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446924 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.446973 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465874 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447036 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447057 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447320 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.447410 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.448289 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.448809 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.449320 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.450635 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.450723 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.451390 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452016 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452013 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452296 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452312 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452456 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452498 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452696 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452727 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.452856 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.453029 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.453052 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.453183 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.453326 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.453993 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.454044 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.454173 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.454659 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.455117 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.455176 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.455367 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.455911 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.456021 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.456300 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.456454 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.456474 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.456628 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.456941 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.456991 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457336 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457348 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457394 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457471 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457600 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457650 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457696 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.457727 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.458092 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.458337 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.458605 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.458811 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.459253 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.459629 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.461283 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.461309 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.461285 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.461582 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.461622 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.461646 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.461690 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.462115 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.462626 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.462653 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463000 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463002 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463028 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463050 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463103 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463260 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463501 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463534 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463717 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.463786 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.464076 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.464132 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.464267 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.464462 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.464698 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.464895 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465207 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465245 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465233 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.445508 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465706 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.465939 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.466260 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.466429 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.466778 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.466867 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.467053 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.467200 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.467208 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.467304 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:16:49.967261005 +0000 UTC m=+22.562883531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.467508 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.467983 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.468626 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469042 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469196 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469285 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.468797 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.468897 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469497 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469520 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469572 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469604 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469728 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469681 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.469815 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.470122 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.470183 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.470502 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.470527 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.470626 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.470685 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.470913 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.471078 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:49.971049788 +0000 UTC m=+22.566672314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.471123 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:49.97111509 +0000 UTC m=+22.566737616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.471170 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). 
InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.471254 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.471340 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.471431 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:49.971402418 +0000 UTC m=+22.567024944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.471615 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.471827 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.472106 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.472314 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.472532 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.472972 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.475320 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.476473 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.478892 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.479339 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.480748 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.481346 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.482483 4764 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 07:11:48 +0000 UTC, rotation deadline is 2026-11-17 21:36:13.455789889 +0000 UTC Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.482552 4764 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7070h19m23.973240746s for next certificate rotation Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.482555 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod 
"0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.482745 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.484590 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.484668 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.485815 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.486511 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.486714 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.486933 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.487472 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.487695 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.487729 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.487776 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.487969 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.488716 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.488739 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.488758 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.488844 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:49.988806603 +0000 UTC m=+22.584429129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.488965 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.488998 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.490288 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.490798 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.490929 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.490966 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.491125 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.492402 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.493936 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.496041 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.496086 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.496246 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.497509 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.497983 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.501085 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.501705 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.504068 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.506551 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.507462 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.507763 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.509425 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.509943 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.512008 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.513429 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.516071 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.517076 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.520672 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.533552 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.541677 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.541865 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549268 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgrr7\" (UniqueName: \"kubernetes.io/projected/2794da51-6825-4d02-8ed3-bc0ff88fb961-kube-api-access-pgrr7\") pod \"node-resolver-4sbqw\" (UID: \"2794da51-6825-4d02-8ed3-bc0ff88fb961\") " pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549334 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549387 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549413 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2794da51-6825-4d02-8ed3-bc0ff88fb961-hosts-file\") pod \"node-resolver-4sbqw\" (UID: \"2794da51-6825-4d02-8ed3-bc0ff88fb961\") " pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549520 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549586 4764 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549630 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549653 4764 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549668 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549679 4764 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549690 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549700 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549710 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549721 4764 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549730 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549739 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549730 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549750 4764 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549857 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549876 4764 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549895 4764 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549754 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2794da51-6825-4d02-8ed3-bc0ff88fb961-hosts-file\") pod \"node-resolver-4sbqw\" (UID: \"2794da51-6825-4d02-8ed3-bc0ff88fb961\") " pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549936 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" 
(UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549975 4764 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.549994 4764 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550009 4764 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550021 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550031 4764 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550042 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550052 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550061 4764 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550070 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550080 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550090 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550100 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550112 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550124 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on 
node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550155 4764 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550177 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550187 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550200 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550215 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550229 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550241 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 
07:16:49.550255 4764 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550267 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550279 4764 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550291 4764 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550305 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550321 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550336 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550348 4764 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550363 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550376 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550387 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550398 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550411 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550420 4764 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550428 4764 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc 
kubenswrapper[4764]: I0127 07:16:49.550458 4764 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550468 4764 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550478 4764 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550487 4764 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550497 4764 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550507 4764 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550517 4764 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550526 4764 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550537 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550549 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550559 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550570 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550579 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550590 4764 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550600 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550609 4764 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550617 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550627 4764 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550635 4764 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550646 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550655 4764 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550664 4764 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550672 4764 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550682 4764 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550691 4764 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550699 4764 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550709 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550721 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550735 4764 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node 
\"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550747 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550760 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550773 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550786 4764 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550798 4764 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550810 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550823 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550834 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550844 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550853 4764 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550864 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550875 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550885 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550896 4764 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550909 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550921 4764 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.550933 4764 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551676 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551694 4764 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551707 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551718 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath 
\"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551730 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551741 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551751 4764 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551762 4764 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551773 4764 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551782 4764 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551792 4764 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551802 4764 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551814 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551826 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551868 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551884 4764 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551895 4764 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551907 4764 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551920 4764 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" 
DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551931 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551943 4764 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551955 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551967 4764 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551978 4764 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.551990 4764 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552001 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552011 4764 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552023 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552035 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552046 4764 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552057 4764 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552068 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552078 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552088 4764 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath 
\"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552098 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552108 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552117 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552129 4764 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552140 4764 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552151 4764 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552161 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552171 4764 reconciler_common.go:293] "Volume detached for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552180 4764 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552190 4764 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552200 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552210 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552220 4764 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552230 4764 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552240 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on 
node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552284 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552295 4764 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552304 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552314 4764 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552324 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552335 4764 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552345 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 
07:16:49.552355 4764 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552366 4764 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552377 4764 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552389 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552403 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552414 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552454 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552466 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552477 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552492 4764 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552506 4764 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552518 4764 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552530 4764 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552542 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552552 4764 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" 
DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552565 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552576 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552588 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552601 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552613 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552624 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552635 4764 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552646 4764 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552658 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552783 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552804 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552817 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552828 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552838 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552850 4764 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552859 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552868 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552877 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552886 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552894 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552902 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552911 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.552920 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.567360 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgrr7\" (UniqueName: \"kubernetes.io/projected/2794da51-6825-4d02-8ed3-bc0ff88fb961-kube-api-access-pgrr7\") pod \"node-resolver-4sbqw\" (UID: \"2794da51-6825-4d02-8ed3-bc0ff88fb961\") " pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.588091 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.589366 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.595327 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e" exitCode=255 Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.595405 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e"} Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.595494 4764 scope.go:117] "RemoveContainer" 
containerID="8e50e74ea98162513cf25b1af73fc5ec8f2546b304e8318474e736b121fb6f55" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.613579 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.623033 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.635315 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.649358 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.658429 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.669934 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.679307 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.691554 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.699803 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.706778 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.714696 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4sbqw" Jan 27 07:16:49 crc kubenswrapper[4764]: W0127 07:16:49.729376 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-274a2348c8ae7832ed4cb6f44d6fa8558e48cdf49d8d621b08adeacb5f0eeb86 WatchSource:0}: Error finding container 274a2348c8ae7832ed4cb6f44d6fa8558e48cdf49d8d621b08adeacb5f0eeb86: Status 404 returned error can't find the container with id 274a2348c8ae7832ed4cb6f44d6fa8558e48cdf49d8d621b08adeacb5f0eeb86 Jan 27 07:16:49 crc kubenswrapper[4764]: W0127 07:16:49.730565 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2794da51_6825_4d02_8ed3_bc0ff88fb961.slice/crio-9ea44a61f13517af2e07db453ef2bf48f9043982d3607e8c3ac17f966acedb8a WatchSource:0}: Error finding container 9ea44a61f13517af2e07db453ef2bf48f9043982d3607e8c3ac17f966acedb8a: Status 404 returned error can't find the container with id 9ea44a61f13517af2e07db453ef2bf48f9043982d3607e8c3ac17f966acedb8a Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.782750 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:16:49 crc kubenswrapper[4764]: I0127 07:16:49.783081 4764 scope.go:117] "RemoveContainer" containerID="5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e" Jan 27 07:16:49 crc kubenswrapper[4764]: E0127 07:16:49.783350 4764 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.058087 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.058182 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.058218 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.058244 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058275 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:16:51.058243446 +0000 UTC m=+23.653865972 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.058323 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058342 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058388 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058412 4764 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058427 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058464 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058478 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:51.058452541 +0000 UTC m=+23.654075057 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058489 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058506 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058512 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:51.058490722 +0000 UTC m=+23.654113438 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058428 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058552 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:51.058540784 +0000 UTC m=+23.654163480 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.058575 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:51.058566914 +0000 UTC m=+23.654189680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.215329 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2dvbb"] Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.215750 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.218672 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lh5rf"] Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.219687 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.221781 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.222015 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.222166 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.222326 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.222838 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k8qgf"] Jan 27 07:16:50 crc 
kubenswrapper[4764]: I0127 07:16:50.223935 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.224195 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.224562 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.224699 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.226114 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gwmsf"] Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.228175 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.231190 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.231490 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.232980 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.233015 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.233180 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 
27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.232482 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.233382 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.232657 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.232888 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.232922 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.232958 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.232011 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.239196 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.259144 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.278911 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.311467 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e50e74ea98162513cf25b1af73fc5ec8f2546b304e8318474e736b121fb6f55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:43Z\\\",\\\"message\\\":\\\"W0127 07:16:32.355666 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 07:16:32.356320 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769498192 cert, and key in /tmp/serving-cert-3447461610/serving-signer.crt, 
/tmp/serving-cert-3447461610/serving-signer.key\\\\nI0127 07:16:32.581338 1 observer_polling.go:159] Starting file observer\\\\nW0127 07:16:32.584287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 07:16:32.584576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:32.587998 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3447461610/tls.crt::/tmp/serving-cert-3447461610/tls.key\\\\\\\"\\\\nF0127 07:16:43.198282 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.346329 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.357879 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361140 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-netd\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361176 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361199 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-multus-certs\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361220 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a061a513-f05f-4aa7-8310-5e418f3f747d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361270 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6fw\" (UniqueName: \"kubernetes.io/projected/91863a32-a5e4-42d3-9d33-d672d2f1300d-kube-api-access-pf6fw\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361286 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a061a513-f05f-4aa7-8310-5e418f3f747d-rootfs\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361304 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8phc\" (UniqueName: \"kubernetes.io/projected/b8be2cdf-f587-4704-9020-dcb7c8ced33d-kube-api-access-f8phc\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361469 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-cni-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361522 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-systemd-units\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361557 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-log-socket\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361614 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-config\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: 
I0127 07:16:50.361642 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-os-release\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361695 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-cnibin\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361713 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e936b8fc-81d9-4222-a66f-742b2db87386-multus-daemon-config\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361838 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-system-cni-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361917 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-ovn-kubernetes\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 
07:16:50.361947 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-netns\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361969 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-conf-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.361992 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-etc-kubernetes\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362116 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-k8s-cni-cncf-io\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362181 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-var-lib-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362244 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-cni-multus\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362281 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-hostroot\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362310 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-node-log\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362339 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cni-binary-copy\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362388 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc 
kubenswrapper[4764]: I0127 07:16:50.362506 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovn-node-metrics-cert\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362582 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-script-lib\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362620 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-cni-bin\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362674 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-kubelet\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362730 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99ff\" (UniqueName: \"kubernetes.io/projected/e936b8fc-81d9-4222-a66f-742b2db87386-kube-api-access-d99ff\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 
07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362766 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-kubelet\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362795 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-bin\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362828 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-os-release\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362854 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e936b8fc-81d9-4222-a66f-742b2db87386-cni-binary-copy\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362889 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362920 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-env-overrides\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362957 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-socket-dir-parent\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.362992 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-systemd\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363029 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-etc-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363068 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: 
\"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363108 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-system-cni-dir\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363142 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-slash\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363173 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-netns\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363203 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-ovn\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363234 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a061a513-f05f-4aa7-8310-5e418f3f747d-proxy-tls\") pod 
\"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363260 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqqx\" (UniqueName: \"kubernetes.io/projected/a061a513-f05f-4aa7-8310-5e418f3f747d-kube-api-access-zhqqx\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.363295 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cnibin\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.383011 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.388410 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:42:08.707521635 +0000 UTC Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.403872 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.416061 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.431400 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.442399 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.443040 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.444004 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.444844 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 
07:16:50.445598 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.446195 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.446940 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.447653 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.448433 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.449124 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.449844 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.450701 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 
07:16:50.451290 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.451534 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.454669 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.455277 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.455899 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.456754 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.457178 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.457960 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.458619 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.459149 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.459849 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.460320 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.461101 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.461688 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.462317 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.463065 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.463704 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464116 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-socket-dir-parent\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464166 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-systemd\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464195 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-etc-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc 
kubenswrapper[4764]: I0127 07:16:50.464219 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464249 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-system-cni-dir\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464275 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-ovn\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464304 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a061a513-f05f-4aa7-8310-5e418f3f747d-proxy-tls\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464337 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqqx\" (UniqueName: \"kubernetes.io/projected/a061a513-f05f-4aa7-8310-5e418f3f747d-kube-api-access-zhqqx\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc 
kubenswrapper[4764]: I0127 07:16:50.464345 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-etc-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464386 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464394 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-slash\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464460 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-ovn\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464491 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-netns\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464522 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-slash\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464494 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-system-cni-dir\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464546 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-netns\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464475 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-socket-dir-parent\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464461 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-systemd\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464559 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cnibin\") 
pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464616 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cnibin\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464620 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-netd\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464587 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464650 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464699 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-netd\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: 
I0127 07:16:50.464701 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-multus-certs\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464756 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a061a513-f05f-4aa7-8310-5e418f3f747d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464760 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-multus-certs\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464782 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf6fw\" (UniqueName: \"kubernetes.io/projected/91863a32-a5e4-42d3-9d33-d672d2f1300d-kube-api-access-pf6fw\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464805 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a061a513-f05f-4aa7-8310-5e418f3f747d-rootfs\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464825 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8phc\" (UniqueName: \"kubernetes.io/projected/b8be2cdf-f587-4704-9020-dcb7c8ced33d-kube-api-access-f8phc\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464844 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-log-socket\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464862 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-config\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464880 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-os-release\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464924 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-cni-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464951 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-systemd-units\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464972 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-cnibin\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464989 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e936b8fc-81d9-4222-a66f-742b2db87386-multus-daemon-config\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.464986 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465043 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-system-cni-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465064 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-ovn-kubernetes\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465109 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-netns\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465132 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-conf-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465155 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-etc-kubernetes\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465175 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-k8s-cni-cncf-io\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465192 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-var-lib-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465195 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-os-release\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465213 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cni-binary-copy\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465236 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465265 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-cni-multus\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465286 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-hostroot\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465304 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-node-log\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465347 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-script-lib\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465408 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovn-node-metrics-cert\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465432 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-bin\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465476 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-cni-bin\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465506 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-log-socket\") pod 
\"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465513 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-kubelet\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465541 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99ff\" (UniqueName: \"kubernetes.io/projected/e936b8fc-81d9-4222-a66f-742b2db87386-kube-api-access-d99ff\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465563 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-kubelet\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465586 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-env-overrides\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465609 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-os-release\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 
27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465630 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e936b8fc-81d9-4222-a66f-742b2db87386-cni-binary-copy\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465652 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465723 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-cni-multus\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465729 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465760 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-systemd-units\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc 
kubenswrapper[4764]: I0127 07:16:50.465796 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-cnibin\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465825 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a061a513-f05f-4aa7-8310-5e418f3f747d-rootfs\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465845 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-cni-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465881 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-hostroot\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465912 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-node-log\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466082 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/b8be2cdf-f587-4704-9020-dcb7c8ced33d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466145 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-etc-kubernetes\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466159 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a061a513-f05f-4aa7-8310-5e418f3f747d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466201 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-system-cni-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466212 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-var-lib-openvswitch\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466247 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-k8s-cni-cncf-io\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466253 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-ovn-kubernetes\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466302 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-run-netns\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466342 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-multus-conf-dir\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466683 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e936b8fc-81d9-4222-a66f-742b2db87386-multus-daemon-config\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466702 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-config\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466743 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-kubelet\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.466988 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-host-var-lib-cni-bin\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.467045 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-bin\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.467074 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cni-binary-copy\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.467107 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e936b8fc-81d9-4222-a66f-742b2db87386-os-release\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.467277 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b8be2cdf-f587-4704-9020-dcb7c8ced33d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.465474 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.467471 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-kubelet\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.467961 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e936b8fc-81d9-4222-a66f-742b2db87386-cni-binary-copy\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.468133 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-env-overrides\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.468302 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-script-lib\") pod \"ovnkube-node-gwmsf\" (UID: 
\"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.469122 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a061a513-f05f-4aa7-8310-5e418f3f747d-proxy-tls\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.469391 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovn-node-metrics-cert\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.469431 4764 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.469611 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.471931 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.473011 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.473704 4764 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.475724 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.477111 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.477867 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.479240 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.479518 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.480233 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.480850 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.482620 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.483418 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.484123 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf6fw\" (UniqueName: \"kubernetes.io/projected/91863a32-a5e4-42d3-9d33-d672d2f1300d-kube-api-access-pf6fw\") pod \"ovnkube-node-gwmsf\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.484130 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqqx\" (UniqueName: \"kubernetes.io/projected/a061a513-f05f-4aa7-8310-5e418f3f747d-kube-api-access-zhqqx\") pod \"machine-config-daemon-k8qgf\" (UID: \"a061a513-f05f-4aa7-8310-5e418f3f747d\") " pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.484746 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99ff\" (UniqueName: \"kubernetes.io/projected/e936b8fc-81d9-4222-a66f-742b2db87386-kube-api-access-d99ff\") pod \"multus-2dvbb\" (UID: \"e936b8fc-81d9-4222-a66f-742b2db87386\") " 
pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.484964 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.485256 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8phc\" (UniqueName: \"kubernetes.io/projected/b8be2cdf-f587-4704-9020-dcb7c8ced33d-kube-api-access-f8phc\") pod \"multus-additional-cni-plugins-lh5rf\" (UID: \"b8be2cdf-f587-4704-9020-dcb7c8ced33d\") " pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.485627 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.486818 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.487590 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.489399 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.489936 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 
07:16:50.490708 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.491357 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.492101 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.492731 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.493918 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.495237 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.503779 
4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.516025 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.529161 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.543192 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2dvbb" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.547885 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e50e74ea98162513cf25b1af73fc5ec8f2546b304e8318474e736b121fb6f55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:43Z\\\",\\\"message\\\":\\\"W0127 07:16:32.355666 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 07:16:32.356320 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769498192 cert, and key in /tmp/serving-cert-3447461610/serving-signer.crt, /tmp/serving-cert-3447461610/serving-signer.key\\\\nI0127 07:16:32.581338 1 observer_polling.go:159] Starting file observer\\\\nW0127 07:16:32.584287 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 07:16:32.584576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:32.587998 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3447461610/tls.crt::/tmp/serving-cert-3447461610/tls.key\\\\\\\"\\\\nF0127 07:16:43.198282 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.550852 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" Jan 27 07:16:50 crc kubenswrapper[4764]: W0127 07:16:50.555981 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode936b8fc_81d9_4222_a66f_742b2db87386.slice/crio-ca728564ae222249e5797065c0d987f964505542ab9198d22aa59b74631912ad WatchSource:0}: Error finding container ca728564ae222249e5797065c0d987f964505542ab9198d22aa59b74631912ad: Status 404 returned error can't find the container with id ca728564ae222249e5797065c0d987f964505542ab9198d22aa59b74631912ad Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.559581 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.564040 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.567035 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.577571 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: W0127 07:16:50.580356 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda061a513_f05f_4aa7_8310_5e418f3f747d.slice/crio-9192b0f6e9cec1346c98d5406669e9a3cf231aba5e360451b3bc9f929f257a7b WatchSource:0}: Error finding container 9192b0f6e9cec1346c98d5406669e9a3cf231aba5e360451b3bc9f929f257a7b: Status 404 returned error can't find the container with id 9192b0f6e9cec1346c98d5406669e9a3cf231aba5e360451b3bc9f929f257a7b Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.601098 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.604181 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerStarted","Data":"e46fb3ec79be5475b943d7b72699e2e2860f8c3fbbec01dbebe08ab3f9bd0110"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.607352 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerStarted","Data":"ca728564ae222249e5797065c0d987f964505542ab9198d22aa59b74631912ad"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.612269 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.616919 4764 scope.go:117] "RemoveContainer" containerID="5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e" Jan 27 07:16:50 crc kubenswrapper[4764]: E0127 07:16:50.617186 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.623722 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.623783 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2f25ae0dd20671d3398b7ef9b4858205cc5faa9c21ddd9f8672fd9ea9e270cb7"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.626617 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"362e5ce38a6f433b85eb612a186a908dcff7f16e41ae87c9e8d43d3c40748a47"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.628526 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.628642 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.628707 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"274a2348c8ae7832ed4cb6f44d6fa8558e48cdf49d8d621b08adeacb5f0eeb86"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.631049 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"76f207b0105a912abd4edf63e3b1cfaf7e68f0a9ebc73a8f035437575dc49046"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.633230 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"9192b0f6e9cec1346c98d5406669e9a3cf231aba5e360451b3bc9f929f257a7b"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.636009 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4sbqw" event={"ID":"2794da51-6825-4d02-8ed3-bc0ff88fb961","Type":"ContainerStarted","Data":"ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.636065 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4sbqw" event={"ID":"2794da51-6825-4d02-8ed3-bc0ff88fb961","Type":"ContainerStarted","Data":"9ea44a61f13517af2e07db453ef2bf48f9043982d3607e8c3ac17f966acedb8a"} Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.636585 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.651193 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.663961 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.677528 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.691754 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.705248 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.721190 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.737908 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.749487 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.761580 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.782908 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.803533 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.826544 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.842104 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.855880 4764 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.869887 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.882483 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.895216 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.908067 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.923664 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.939674 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.959467 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.974389 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:50 crc kubenswrapper[4764]: I0127 07:16:50.991966 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:50Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.071596 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.071733 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071776 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:16:53.071749454 +0000 UTC m=+25.667372000 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.071810 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.071844 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071872 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.071880 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071887 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071929 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071938 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071946 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071986 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:53.07197744 +0000 UTC m=+25.667599966 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071988 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.071938 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.072038 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:53.072021981 +0000 UTC m=+25.667644527 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.072166 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.072178 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:53.072160485 +0000 UTC m=+25.667783031 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.072357 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:53.072318919 +0000 UTC m=+25.667941665 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.389502 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:36:28.730020835 +0000 UTC Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.425776 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.429663 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.434861 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.438303 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.438496 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.438879 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.438945 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.438994 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:51 crc kubenswrapper[4764]: E0127 07:16:51.439045 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.444649 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.456182 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.471390 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.526575 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.547995 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.569238 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.586058 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.607638 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.623610 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.641331 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f"} Jan 27 07:16:51 crc 
kubenswrapper[4764]: I0127 07:16:51.641453 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13"} Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.642644 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f" exitCode=0 Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.642713 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.644130 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.645586 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8be2cdf-f587-4704-9020-dcb7c8ced33d" containerID="22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d" exitCode=0 Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.645662 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-lh5rf" event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerDied","Data":"22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d"} Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.647358 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerStarted","Data":"1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999"} Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.658965 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.678422 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.696665 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.720548 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.732619 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.745484 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.764200 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 
07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.777821 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.798771 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.820333 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.843430 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.861232 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.881731 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.904899 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:51 crc kubenswrapper[4764]: I0127 07:16:51.921008 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:51Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.061090 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xfxc7"] Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.061647 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.063810 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.063865 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.064312 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.064961 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.077889 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.094813 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.112542 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.128552 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.143804 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.156649 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.169709 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.184178 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.186123 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e75860c8-bd8b-434f-b2c6-91e7b7f60638-serviceca\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.186176 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rnzb\" (UniqueName: \"kubernetes.io/projected/e75860c8-bd8b-434f-b2c6-91e7b7f60638-kube-api-access-9rnzb\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.186211 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e75860c8-bd8b-434f-b2c6-91e7b7f60638-host\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.203801 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.217558 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.232524 4764 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.258898 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.287079 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e75860c8-bd8b-434f-b2c6-91e7b7f60638-serviceca\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.287127 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rnzb\" (UniqueName: \"kubernetes.io/projected/e75860c8-bd8b-434f-b2c6-91e7b7f60638-kube-api-access-9rnzb\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.287161 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e75860c8-bd8b-434f-b2c6-91e7b7f60638-host\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") 
" pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.287291 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e75860c8-bd8b-434f-b2c6-91e7b7f60638-host\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.289247 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e75860c8-bd8b-434f-b2c6-91e7b7f60638-serviceca\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.302384 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.327891 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rnzb\" (UniqueName: \"kubernetes.io/projected/e75860c8-bd8b-434f-b2c6-91e7b7f60638-kube-api-access-9rnzb\") pod \"node-ca-xfxc7\" (UID: \"e75860c8-bd8b-434f-b2c6-91e7b7f60638\") " pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 
07:16:52.336759 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xfxc7" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.360891 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.389808 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:00:45.422890583 +0000 UTC Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.662387 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} Jan 
27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.662859 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.662872 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.662885 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.665350 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90"} Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.667015 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8be2cdf-f587-4704-9020-dcb7c8ced33d" containerID="aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22" exitCode=0 Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.667096 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerDied","Data":"aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22"} Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.675343 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/node-ca-xfxc7" event={"ID":"e75860c8-bd8b-434f-b2c6-91e7b7f60638","Type":"ContainerStarted","Data":"b874c75c22e94b838230660b6b2864b105daec1d2d85beb6a5155eb1d83d793b"} Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.688615 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.708831 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.726395 4764 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.745869 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.771461 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.803967 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.829005 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.865627 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.877325 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.895481 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.912294 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.930508 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.945260 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 
07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.956500 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:52 crc kubenswrapper[4764]: I0127 07:16:52.972046 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.000104 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b
383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.039916 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.075920 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.098364 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.098490 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.098517 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.098541 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.098561 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098710 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098730 4764 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:16:57.098675546 +0000 UTC m=+29.694298102 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098796 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:57.098777849 +0000 UTC m=+29.694400415 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098841 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098877 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098901 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098928 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098952 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:57.098940254 +0000 UTC m=+29.694562810 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.098863 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.099023 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.099053 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.099029 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:57.099002855 +0000 UTC m=+29.694625401 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.099204 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:16:57.09916768 +0000 UTC m=+29.694790246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.117272 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.160100 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.198830 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.244229 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.289979 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.320352 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.365719 4764 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.391085 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:49:22.198646489 +0000 UTC Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.408082 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.438222 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.438285 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.438373 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.438551 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.438222 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:53 crc kubenswrapper[4764]: E0127 07:16:53.438685 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.445390 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.487259 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.683470 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8be2cdf-f587-4704-9020-dcb7c8ced33d" containerID="005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d" exitCode=0 Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.683622 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerDied","Data":"005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d"} Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.686118 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xfxc7" event={"ID":"e75860c8-bd8b-434f-b2c6-91e7b7f60638","Type":"ContainerStarted","Data":"1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19"} Jan 27 07:16:53 crc 
kubenswrapper[4764]: I0127 07:16:53.692325 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.692395 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.714540 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.733720 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.749223 4764 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.765977 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.788203 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.806015 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.822478 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.841830 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.861049 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 
07:16:53.874190 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.920187 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:53 crc kubenswrapper[4764]: I0127 07:16:53.959462 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.000088 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:53Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.043177 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.090803 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.125229 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d60
8d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.159711 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.201830 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.241108 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.282374 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.325471 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.358212 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.391581 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:05:37.824947168 +0000 UTC Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.399554 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.439136 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.483413 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.525584 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.560084 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.599620 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.709069 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8be2cdf-f587-4704-9020-dcb7c8ced33d" containerID="085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33" exitCode=0 Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.709333 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" 
event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerDied","Data":"085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33"} Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.754521 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.778415 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 
07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.797609 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a
4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287f
aaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.817342 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.833134 4764 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.837212 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.837272 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.837293 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.837529 4764 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.840523 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.849850 4764 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.850217 4764 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.851707 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.851769 4764 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.851784 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.851824 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.851839 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:54Z","lastTransitionTime":"2026-01-27T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:54 crc kubenswrapper[4764]: E0127 07:16:54.866106 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.873512 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.873557 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.873571 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.873596 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.873613 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:54Z","lastTransitionTime":"2026-01-27T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.882548 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: E0127 07:16:54.889050 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.893808 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.893860 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.893869 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.893895 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.893938 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:54Z","lastTransitionTime":"2026-01-27T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:54 crc kubenswrapper[4764]: E0127 07:16:54.909136 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.914853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.914903 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.914917 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.914942 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.914958 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:54Z","lastTransitionTime":"2026-01-27T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.921554 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: E0127 07:16:54.933358 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.937057 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.937087 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.937098 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.937116 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.937128 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:54Z","lastTransitionTime":"2026-01-27T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:54 crc kubenswrapper[4764]: E0127 07:16:54.953397 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: E0127 07:16:54.953559 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.955018 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.955308 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.955329 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.955340 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.955358 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.955367 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:54Z","lastTransitionTime":"2026-01-27T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:54 crc kubenswrapper[4764]: I0127 07:16:54.998716 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:54Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.039521 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.059465 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.059511 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.059524 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.059543 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.059557 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.079025 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.123840 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.156899 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.161853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.161901 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.161915 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.161939 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.161954 4764 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.196816 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.265349 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.265411 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.265431 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.265488 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.265507 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.368782 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.368850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.368889 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.368915 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.368931 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.392621 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:52:43.371172433 +0000 UTC Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.438111 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.438111 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.438138 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:55 crc kubenswrapper[4764]: E0127 07:16:55.438286 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:16:55 crc kubenswrapper[4764]: E0127 07:16:55.438641 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:16:55 crc kubenswrapper[4764]: E0127 07:16:55.438766 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.472558 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.472624 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.472644 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.472671 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.472690 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.575728 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.575773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.575782 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.575798 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.575811 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.661611 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.662836 4764 scope.go:117] "RemoveContainer" containerID="5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e" Jan 27 07:16:55 crc kubenswrapper[4764]: E0127 07:16:55.663125 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.679055 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.679135 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.679154 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.679184 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.679202 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.719476 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8be2cdf-f587-4704-9020-dcb7c8ced33d" containerID="c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7" exitCode=0 Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.719605 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerDied","Data":"c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.727744 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.742935 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.765274 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.782606 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.782674 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.782688 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.782714 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.782743 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.788699 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.814094 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.831499 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.856089 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.872622 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.885783 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.885833 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.885846 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.885864 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.885877 4764 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.888866 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.911030 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.927718 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.942033 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.956558 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.973948 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.989121 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.989185 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.989229 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.989268 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.989286 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:55Z","lastTransitionTime":"2026-01-27T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:55 crc kubenswrapper[4764]: I0127 07:16:55.993494 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.091968 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.092344 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.092520 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.092642 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.092746 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.196106 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.196154 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.196165 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.196184 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.196198 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.311836 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.311916 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.311929 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.311952 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.311971 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.393309 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:29:31.394285627 +0000 UTC Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.416625 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.416703 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.416727 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.416764 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.416790 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.520788 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.521152 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.521165 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.521183 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.521196 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.623771 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.623886 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.623906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.623931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.623949 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.726623 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.726672 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.726683 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.726705 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.726721 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.743514 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8be2cdf-f587-4704-9020-dcb7c8ced33d" containerID="7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115" exitCode=0 Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.743609 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerDied","Data":"7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.781885 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.808375 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.829892 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.829964 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.829983 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc 
kubenswrapper[4764]: I0127 07:16:56.830012 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.830032 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.830584 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.856347 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.878583 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.903120 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.922581 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.933275 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.933320 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.933336 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.933359 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.933374 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:56Z","lastTransitionTime":"2026-01-27T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.935273 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.948065 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.961823 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.975728 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:56 crc kubenswrapper[4764]: I0127 07:16:56.995297 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.010144 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.025719 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.036096 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.036133 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.036142 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc 
kubenswrapper[4764]: I0127 07:16:57.036155 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.036166 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.140195 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.140248 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.140261 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.140284 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.140299 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.194473 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.194740 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.194793 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:17:05.194747068 +0000 UTC m=+37.790369634 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.194936 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.194977 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.194991 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195062 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:05.195041186 +0000 UTC m=+37.790663712 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195146 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195184 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195206 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195293 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:05.195267792 +0000 UTC m=+37.790890358 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.194872 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.195633 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.195689 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195768 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 
07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195812 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:05.195802756 +0000 UTC m=+37.791425282 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195832 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.195932 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:05.19591543 +0000 UTC m=+37.791537996 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.244091 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.244155 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.244177 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.244208 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.244231 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.348403 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.348503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.348525 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.348552 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.348571 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.394248 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:05:56.19336522 +0000 UTC Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.437817 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.437849 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.438037 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.438153 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.438478 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:16:57 crc kubenswrapper[4764]: E0127 07:16:57.438636 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.452746 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.452813 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.452825 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.452846 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.452859 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.556008 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.556069 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.556086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.556119 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.556137 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.659290 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.659335 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.659353 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.659379 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.659397 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.756518 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.757226 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.763325 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.763381 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.763393 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.763411 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.763423 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.765175 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" event={"ID":"b8be2cdf-f587-4704-9020-dcb7c8ced33d","Type":"ContainerStarted","Data":"a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.786043 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.794619 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.798424 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.813313 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.832136 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.849615 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.867483 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.867528 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.867539 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.867555 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.867566 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.878234 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.904587 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.926603 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.957337 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.970102 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.970149 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.970159 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.970177 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.970187 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:57Z","lastTransitionTime":"2026-01-27T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.973726 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:57 crc kubenswrapper[4764]: I0127 07:16:57.987209 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.001834 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.014463 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.027820 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.043367 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.056950 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.070694 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.072783 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.072816 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.072825 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.072840 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.072852 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.088047 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.102791 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.120248 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8
6a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.138165 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.152694 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.176541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.177121 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.177132 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.177157 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.177172 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.186931 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.204175 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.214180 4764 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.231430 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g/status\": read tcp 38.102.83.73:52210->38.102.83.73:6443: use of closed network connection" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.249979 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.265468 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.280291 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.280614 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.280692 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.280765 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.280822 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.281482 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.383574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.383673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.383710 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.383742 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.383757 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.394812 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:23:50.601875481 +0000 UTC Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.458194 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.474299 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7
b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.486884 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.486930 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.486945 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.486963 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.486976 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.501396 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.520845 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.540289 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.559235 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.571198 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.584563 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.589729 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.589773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.589784 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc 
kubenswrapper[4764]: I0127 07:16:58.589802 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.589813 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.607789 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.621637 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
7T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.639518 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.656363 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.674144 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.687413 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.693218 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.693289 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.693305 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.693330 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.693349 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.769086 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.769714 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.795249 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.796222 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.796261 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.796275 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.796294 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.796307 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.815586 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.835151 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.848600 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.862855 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.875602 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.892316 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.899029 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.899059 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.899071 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.899096 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.899109 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:58Z","lastTransitionTime":"2026-01-27T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.908066 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.920429 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.935367 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.945135 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.960428 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.973272 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:58 crc kubenswrapper[4764]: I0127 07:16:58.985767 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:58.999938 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:16:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.001478 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.001579 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.001665 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.001736 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.001804 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.104495 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.104760 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.104833 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.104940 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.105024 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.207645 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.208056 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.208187 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.208327 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.208508 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.311742 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.312037 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.312114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.312205 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.312284 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.395521 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:29:45.852767182 +0000 UTC Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.414849 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.414930 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.414954 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.414985 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.415055 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.437671 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.437734 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.437734 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:16:59 crc kubenswrapper[4764]: E0127 07:16:59.437917 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:16:59 crc kubenswrapper[4764]: E0127 07:16:59.438015 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:16:59 crc kubenswrapper[4764]: E0127 07:16:59.438140 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.517174 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.517204 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.517213 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.517230 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.517243 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.619857 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.619920 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.619941 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.619968 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.619985 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.723320 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.723652 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.723747 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.723856 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.723994 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.773572 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.826209 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.826279 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.826289 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.826309 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.826319 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.928985 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.929030 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.929042 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.929060 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:16:59 crc kubenswrapper[4764]: I0127 07:16:59.929073 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:16:59Z","lastTransitionTime":"2026-01-27T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.032063 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.032127 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.032145 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.032165 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.032177 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.135474 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.135541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.135557 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.135583 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.135601 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.238887 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.238977 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.238996 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.239031 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.239054 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.342152 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.342210 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.342228 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.342251 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.342266 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.396819 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:16:04.569831916 +0000 UTC Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.445328 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.445376 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.445389 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.445412 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.445426 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.548194 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.548235 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.548247 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.548263 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.548274 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.651590 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.651714 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.651736 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.651766 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.651786 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.755291 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.755341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.755354 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.755374 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.755388 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.780574 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/0.log" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.785022 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454" exitCode=1 Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.785072 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.786019 4764 scope.go:117] "RemoveContainer" containerID="953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.826897 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\
\\"2026-01-27T07:17:00Z\\\",\\\"message\\\":\\\"lector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:17:00.411764 6065 factory.go:656] Stopping watch factory\\\\nI0127 07:17:00.411789 6065 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 07:17:00.411802 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 07:17:00.411817 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:17:00.411831 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 07:17:00.411873 6065 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:17:00.412138 6065 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 07:17:00.412363 6065 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:17:00.412495 6065 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.846531 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.858796 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.858927 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.858946 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.858971 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.858988 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.871128 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.888019 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.911206 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.934496 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.957192 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.962390 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:00 crc 
kubenswrapper[4764]: I0127 07:17:00.962520 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.962548 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.962580 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.962614 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:00Z","lastTransitionTime":"2026-01-27T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.982283 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:00 crc kubenswrapper[4764]: I0127 07:17:00.996987 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.016102 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.029020 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.046804 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.066167 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 
07:17:01.066237 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.066250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.066274 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.066288 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.068674 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.083856 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.169671 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.169706 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.169715 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.169733 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.169743 4764 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.271922 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.272210 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.272274 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.272344 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.272412 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.374982 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.375302 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.375368 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.375456 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.375522 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.397475 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:29:28.549175961 +0000 UTC Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.438340 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.438340 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.438521 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:01 crc kubenswrapper[4764]: E0127 07:17:01.438981 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:01 crc kubenswrapper[4764]: E0127 07:17:01.439126 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:01 crc kubenswrapper[4764]: E0127 07:17:01.439331 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.478923 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.479286 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.479423 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.479555 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.479666 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.584125 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.584260 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.584279 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.584310 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.584331 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.687972 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.688020 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.688030 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.688046 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.688060 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.790622 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.790663 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.790679 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.790697 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.790711 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.791856 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/1.log" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.792874 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/0.log" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.798350 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38" exitCode=1 Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.798422 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.798499 4764 scope.go:117] "RemoveContainer" containerID="953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.799850 4764 scope.go:117] "RemoveContainer" containerID="26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38" Jan 27 07:17:01 crc kubenswrapper[4764]: E0127 07:17:01.800148 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.835863 4764 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:00Z\\\",\\\"message\\\":\\\"lector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:17:00.411764 6065 factory.go:656] Stopping watch factory\\\\nI0127 07:17:00.411789 6065 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 07:17:00.411802 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 
07:17:00.411817 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:17:00.411831 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 07:17:00.411873 6065 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:17:00.412138 6065 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 07:17:00.412363 6065 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:17:00.412495 6065 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), 
V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.856677 4764 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.878433 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.895928 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.896021 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.896042 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.896073 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.896094 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.901926 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.919842 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.940599 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.960703 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.977830 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.998772 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.999513 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.999559 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.999574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.999595 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:01 crc kubenswrapper[4764]: I0127 07:17:01.999611 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:01Z","lastTransitionTime":"2026-01-27T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.020287 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.037670 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.057101 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8
6a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.076689 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.096463 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.103018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.103155 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.103255 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.103366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.103493 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.207853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.207933 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.207953 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.207984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.208006 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.311414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.311787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.311888 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.311984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.312064 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.353202 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9"] Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.353815 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.357056 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.360159 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.370693 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f
1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.387974 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.398035 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:02:27.172494075 +0000 UTC Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.405353 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.415617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.415684 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.415698 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.415718 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.415733 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.419938 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.434979 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.449652 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15d6c16d-7028-4bfc-89ed-6a3e799f2759-env-overrides\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.449714 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4wg\" (UniqueName: \"kubernetes.io/projected/15d6c16d-7028-4bfc-89ed-6a3e799f2759-kube-api-access-9f4wg\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.449796 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15d6c16d-7028-4bfc-89ed-6a3e799f2759-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.449876 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/15d6c16d-7028-4bfc-89ed-6a3e799f2759-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.454727 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a16
88df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni
-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.472757 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.486164 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.501718 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.519406 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.519524 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.519544 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc 
kubenswrapper[4764]: I0127 07:17:02.519574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.519598 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.540192 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://953794892f376123e5bd7fb4a4f1c87bd423abaf6fc2c1c1cc8a488ed0f78454\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:00Z\\\",\\\"message\\\":\\\"lector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:17:00.411764 6065 factory.go:656] Stopping watch factory\\\\nI0127 07:17:00.411789 6065 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 07:17:00.411802 6065 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 
07:17:00.411817 6065 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:17:00.411831 6065 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 07:17:00.411873 6065 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:17:00.412138 6065 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 07:17:00.412363 6065 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:17:00.412495 6065 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), 
V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.551289 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/15d6c16d-7028-4bfc-89ed-6a3e799f2759-env-overrides\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.551382 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4wg\" (UniqueName: \"kubernetes.io/projected/15d6c16d-7028-4bfc-89ed-6a3e799f2759-kube-api-access-9f4wg\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.551524 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15d6c16d-7028-4bfc-89ed-6a3e799f2759-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.551601 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15d6c16d-7028-4bfc-89ed-6a3e799f2759-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.552619 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15d6c16d-7028-4bfc-89ed-6a3e799f2759-env-overrides\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 
07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.553017 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15d6c16d-7028-4bfc-89ed-6a3e799f2759-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.559918 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15d6c16d-7028-4bfc-89ed-6a3e799f2759-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.564544 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.580764 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.583592 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f4wg\" (UniqueName: \"kubernetes.io/projected/15d6c16d-7028-4bfc-89ed-6a3e799f2759-kube-api-access-9f4wg\") pod \"ovnkube-control-plane-749d76644c-clrx9\" (UID: \"15d6c16d-7028-4bfc-89ed-6a3e799f2759\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.601824 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.617558 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.622076 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.622153 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.622178 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.622211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.622231 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.636361 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.667928 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.726973 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.727034 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.727057 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.727086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.727110 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.805546 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/1.log" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.811422 4764 scope.go:117] "RemoveContainer" containerID="26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38" Jan 27 07:17:02 crc kubenswrapper[4764]: E0127 07:17:02.811663 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.812849 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" event={"ID":"15d6c16d-7028-4bfc-89ed-6a3e799f2759","Type":"ContainerStarted","Data":"75da9206740a5173a91e50f2e2d5cceaebabd3f6eda2c12fb50b693b63641c21"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.829062 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.830549 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.830613 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc 
kubenswrapper[4764]: I0127 07:17:02.830633 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.830659 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.830676 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.849337 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.864703 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.882253 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.900400 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.917120 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.933661 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.933700 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.933710 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.933729 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.933739 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:02Z","lastTransitionTime":"2026-01-27T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.939596 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.953928 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:02 crc kubenswrapper[4764]: I0127 07:17:02.970313 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.000209 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.017389 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.036821 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.036865 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.036877 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.036898 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.036939 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.037380 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.050066 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.065013 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.080995 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.140385 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.140477 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.140494 4764 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.140518 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.140534 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.243505 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.243556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.243568 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.243590 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.243602 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.346968 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.347018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.347030 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.347049 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.347064 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.398378 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:50:17.849810656 +0000 UTC Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.437892 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.437931 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.438078 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:03 crc kubenswrapper[4764]: E0127 07:17:03.438224 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:03 crc kubenswrapper[4764]: E0127 07:17:03.438374 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:03 crc kubenswrapper[4764]: E0127 07:17:03.438611 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.450916 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.450966 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.450979 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.451003 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.451027 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.554342 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.554409 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.554425 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.554481 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.554498 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.658189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.658268 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.658287 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.658318 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.658340 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.762704 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.762792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.762812 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.762843 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.762864 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.821586 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" event={"ID":"15d6c16d-7028-4bfc-89ed-6a3e799f2759","Type":"ContainerStarted","Data":"db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.821709 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" event={"ID":"15d6c16d-7028-4bfc-89ed-6a3e799f2759","Type":"ContainerStarted","Data":"e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.851136 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.865507 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.865590 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.865605 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.865628 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.865644 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.872333 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.903060 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.903269 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-crfqf"] Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.903819 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:03 crc kubenswrapper[4764]: E0127 07:17:03.903890 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.927719 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.952703 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.967460 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx6d\" (UniqueName: \"kubernetes.io/projected/6a5473d6-3349-44a0-8a36-4112062a89a6-kube-api-access-qxx6d\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.967531 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.968313 4764 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.968372 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.968391 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.968415 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.968430 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:03Z","lastTransitionTime":"2026-01-27T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.968913 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941
ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.982531 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:03 crc kubenswrapper[4764]: I0127 07:17:03.995664 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.014208 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.028856 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.045933 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.063927 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.068460 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.068583 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxx6d\" (UniqueName: \"kubernetes.io/projected/6a5473d6-3349-44a0-8a36-4112062a89a6-kube-api-access-qxx6d\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:04 crc kubenswrapper[4764]: E0127 07:17:04.068655 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:04 crc kubenswrapper[4764]: E0127 07:17:04.068720 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:04.568700873 +0000 UTC m=+37.164323399 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.071015 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.071049 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.071065 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.071086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.071099 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.084246 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.092342 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxx6d\" (UniqueName: \"kubernetes.io/projected/6a5473d6-3349-44a0-8a36-4112062a89a6-kube-api-access-qxx6d\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.099380 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.112433 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.128108 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.144081 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.161765 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.173825 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.173870 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.173880 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.173898 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.173909 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.179999 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.199555 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.219628 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.241637 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca
8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.256133 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.275290 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.276472 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.276525 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.276538 4764 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.276561 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.276576 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.293960 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.307982 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.322137 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.332745 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc 
kubenswrapper[4764]: I0127 07:17:04.351902 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.366885 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.379525 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.379599 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.379618 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.379667 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.379686 4764 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.398903 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:11:43.182448368 +0000 UTC Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.405207 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.484043 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.484115 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.484132 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.484162 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.484182 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.574979 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:04 crc kubenswrapper[4764]: E0127 07:17:04.575378 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:04 crc kubenswrapper[4764]: E0127 07:17:04.575608 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:05.575572405 +0000 UTC m=+38.171194971 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.587227 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.587276 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.587294 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.587323 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.587343 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.691624 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.691698 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.691716 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.691751 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.691771 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.796021 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.796090 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.796109 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.796137 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.796160 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.899889 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.899969 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.899991 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.900021 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:04 crc kubenswrapper[4764]: I0127 07:17:04.900051 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:04Z","lastTransitionTime":"2026-01-27T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.004923 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.005296 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.005322 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.005360 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.005384 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.109984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.110380 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.110410 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.110471 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.110494 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.197166 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.197276 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.197301 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.197336 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.197363 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.222433 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.229796 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.229895 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.229920 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.229951 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.229978 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.250135 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.256363 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.256427 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.256470 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.256499 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.256516 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.275978 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.281500 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.281546 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.281559 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.281579 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.281594 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.285971 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.286146 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286206 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:17:21.286169545 +0000 UTC m=+53.881792101 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.286271 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286388 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.286473 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286544 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286545 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:21.286506754 +0000 UTC m=+53.882129440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286703 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286735 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286756 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286814 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.286807 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286838 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286902 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286817 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:21.286801472 +0000 UTC m=+53.882424038 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286948 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:21.286935166 +0000 UTC m=+53.882557722 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.286978 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:21.286965657 +0000 UTC m=+53.882588213 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.299984 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.305337 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.305537 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.305567 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.305603 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.305628 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.321349 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.321557 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.323785 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.323858 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.323878 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.323903 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.323923 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.399719 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:25:39.656630118 +0000 UTC Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.427484 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.427540 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.427556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.427576 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.427591 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.437612 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.437628 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.437732 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.437765 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.437828 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.438004 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.438167 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.438370 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.531184 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.531274 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.531290 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.531314 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.531329 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.590577 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.590819 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: E0127 07:17:05.590926 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:07.590898353 +0000 UTC m=+40.186520889 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.635157 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.635201 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.635218 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.635247 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.635267 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.738771 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.738849 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.738875 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.738908 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.738934 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.841565 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.841671 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.841690 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.841715 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.841732 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.945410 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.945508 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.945527 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.945553 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:05 crc kubenswrapper[4764]: I0127 07:17:05.945571 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:05Z","lastTransitionTime":"2026-01-27T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.049404 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.049520 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.049557 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.049592 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.049614 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.155402 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.155542 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.155583 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.155614 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.155638 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.258687 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.258773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.258795 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.258823 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.258844 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.362562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.362612 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.362624 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.362642 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.362653 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.400568 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:45:42.028162862 +0000 UTC Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.465419 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.465535 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.465562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.465590 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.465612 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.569140 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.569199 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.569216 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.569245 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.569272 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.672823 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.672898 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.672931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.672962 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.672983 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.777160 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.777232 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.777258 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.777287 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.777309 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.881113 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.881176 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.881196 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.881223 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.881244 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.983902 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.983966 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.983984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.984010 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:06 crc kubenswrapper[4764]: I0127 07:17:06.984029 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:06Z","lastTransitionTime":"2026-01-27T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.087728 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.087804 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.087827 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.087858 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.087880 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.191408 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.191521 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.191547 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.191576 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.191597 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.295617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.295675 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.295687 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.295708 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.295721 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.399496 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.399555 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.399568 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.399588 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.399601 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.401596 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:49:55.000416212 +0000 UTC Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.438365 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:07 crc kubenswrapper[4764]: E0127 07:17:07.438534 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.438363 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.438582 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.438573 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.439293 4764 scope.go:117] "RemoveContainer" containerID="5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e" Jan 27 07:17:07 crc kubenswrapper[4764]: E0127 07:17:07.439487 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:07 crc kubenswrapper[4764]: E0127 07:17:07.439584 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:07 crc kubenswrapper[4764]: E0127 07:17:07.439658 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.502686 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.502739 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.502762 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.502785 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.502801 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.606285 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.606331 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.606341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.606360 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.606372 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.615315 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:07 crc kubenswrapper[4764]: E0127 07:17:07.615518 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:07 crc kubenswrapper[4764]: E0127 07:17:07.615583 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:11.615568185 +0000 UTC m=+44.211190711 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.709125 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.709182 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.709196 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.709217 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.709230 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.811922 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.811976 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.811989 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.812014 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.812029 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.842578 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.845722 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.846326 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.859874 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e
07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.873914 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.891636 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.907276 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.915350 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.915416 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.915433 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.915537 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.915558 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:07Z","lastTransitionTime":"2026-01-27T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.923296 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.939658 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.963665 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.974928 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:07 crc kubenswrapper[4764]: I0127 07:17:07.988508 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.004605 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc 
kubenswrapper[4764]: I0127 07:17:08.018417 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.018477 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.018488 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.018505 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.018540 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.022897 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.037036 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.049974 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.062258 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.075250 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.086330 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.121246 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.121284 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.121294 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.121312 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.121326 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.224050 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.224093 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.224104 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.224118 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.224132 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.284615 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.285583 4764 scope.go:117] "RemoveContainer" containerID="26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38" Jan 27 07:17:08 crc kubenswrapper[4764]: E0127 07:17:08.285748 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.327340 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.327471 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.327501 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.327530 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.327548 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.402511 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:59:18.619587281 +0000 UTC Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.430926 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.430983 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.430998 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.431018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.431403 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.464693 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.482238 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.499257 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.519264 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc 
kubenswrapper[4764]: I0127 07:17:08.534885 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.534937 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.534954 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.534981 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.534999 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.548401 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.566174 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.583033 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.600961 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.617471 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.636198 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.639703 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.639761 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.639775 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.639797 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.639812 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.658190 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.672081 4764 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.687380 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.700979 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.716931 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.728555 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.749943 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.749995 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.750005 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.750024 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.750037 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.853034 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.853095 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.853116 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.853146 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.853170 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.956652 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.956713 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.956724 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.956744 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:08 crc kubenswrapper[4764]: I0127 07:17:08.956758 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:08Z","lastTransitionTime":"2026-01-27T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.060203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.060291 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.060314 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.060342 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.060364 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.164913 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.165005 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.165024 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.165054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.165080 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.269478 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.269538 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.269556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.269581 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.269599 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.372991 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.373124 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.373151 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.373185 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.373212 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.402727 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:09:29.349114743 +0000 UTC Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.437756 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.437829 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.437830 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.437961 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:09 crc kubenswrapper[4764]: E0127 07:17:09.437952 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:09 crc kubenswrapper[4764]: E0127 07:17:09.438158 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:09 crc kubenswrapper[4764]: E0127 07:17:09.438215 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:09 crc kubenswrapper[4764]: E0127 07:17:09.438324 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.476472 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.476537 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.476550 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.476569 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.476583 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.581523 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.581598 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.581622 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.581654 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.581673 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.686380 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.686474 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.686494 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.686526 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.686544 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.791157 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.791231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.791265 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.791294 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.791315 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.895178 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.895286 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.895300 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.895320 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.895333 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.998891 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.998940 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.998953 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.998973 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:09 crc kubenswrapper[4764]: I0127 07:17:09.998985 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:09Z","lastTransitionTime":"2026-01-27T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.103242 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.103323 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.103340 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.103369 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.103388 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.206385 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.206479 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.206498 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.206523 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.206541 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.309719 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.309802 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.309829 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.309860 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.309883 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.403549 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:36:27.450747727 +0000 UTC
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.413412 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.413505 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.413572 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.413601 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.413620 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.517690 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.517762 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.517787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.517821 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.517849 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.620574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.620642 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.620666 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.620697 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.620725 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.724031 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.724104 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.724128 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.724157 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.724181 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.827691 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.827776 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.827801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.827834 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.827859 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.931140 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.931215 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.931231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.931259 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:10 crc kubenswrapper[4764]: I0127 07:17:10.931349 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:10Z","lastTransitionTime":"2026-01-27T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.036139 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.036203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.036216 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.036239 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.036269 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.140148 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.140225 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.140247 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.140276 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.140297 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.243954 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.244007 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.244023 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.244048 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.244064 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.348102 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.348176 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.348192 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.348271 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.348290 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.403842 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:38:19.075587949 +0000 UTC
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.437922 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.438039 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.438056 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.437947 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 07:17:11 crc kubenswrapper[4764]: E0127 07:17:11.438240 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 07:17:11 crc kubenswrapper[4764]: E0127 07:17:11.438348 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 07:17:11 crc kubenswrapper[4764]: E0127 07:17:11.438551 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 07:17:11 crc kubenswrapper[4764]: E0127 07:17:11.438747 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.451479 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.451547 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.451568 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.451591 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.451609 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.555492 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.555583 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.555596 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.555639 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.555657 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.660016 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.660099 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.660116 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.660145 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.660167 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.664765 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf"
Jan 27 07:17:11 crc kubenswrapper[4764]: E0127 07:17:11.664985 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 07:17:11 crc kubenswrapper[4764]: E0127 07:17:11.665100 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:19.665067252 +0000 UTC m=+52.260689958 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.764213 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.764517 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.764536 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.764602 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.764621 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.867792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.867908 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.867963 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.867996 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.868049 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.971712 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.971778 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.971802 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.971833 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:11 crc kubenswrapper[4764]: I0127 07:17:11.971858 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:11Z","lastTransitionTime":"2026-01-27T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.076148 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.076268 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.076288 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.076551 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.076577 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.180038 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.180156 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.180176 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.180202 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.180221 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.283787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.283857 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.283877 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.283906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.283924 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.386481 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.386542 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.386563 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.386591 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.386609 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.404140 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:58:26.825271817 +0000 UTC
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.490518 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.490578 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.490601 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.490629 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.490649 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.594424 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.594548 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.594578 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.594618 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.594649 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.698142 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.698209 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.698236 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.698272 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.698301 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.802250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.802324 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.802341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.802369 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.802386 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.905634 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.905715 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.905737 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.905766 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:12 crc kubenswrapper[4764]: I0127 07:17:12.905784 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:12Z","lastTransitionTime":"2026-01-27T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.008582 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.008660 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.008678 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.008705 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.008762 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.112518 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.112596 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.112623 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.112654 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.112673 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.216373 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.216424 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.216478 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.216512 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.216534 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.319780 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.319861 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.319883 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.319914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.319937 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.404658 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:52:56.46058207 +0000 UTC Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.423680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.423729 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.423743 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.423767 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.423787 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.438234 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.438333 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:13 crc kubenswrapper[4764]: E0127 07:17:13.438534 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.438570 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:13 crc kubenswrapper[4764]: E0127 07:17:13.438756 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.438978 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:13 crc kubenswrapper[4764]: E0127 07:17:13.439011 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:13 crc kubenswrapper[4764]: E0127 07:17:13.439621 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.526480 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.526560 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.526574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.526602 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.526618 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.629107 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.629195 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.629209 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.629230 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.629244 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.732418 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.732519 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.732532 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.732556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.732569 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.836134 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.836240 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.836276 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.836309 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.836337 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.940256 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.940325 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.940345 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.940375 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:13 crc kubenswrapper[4764]: I0127 07:17:13.940398 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:13Z","lastTransitionTime":"2026-01-27T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.043251 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.043307 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.043317 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.043333 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.043347 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.146830 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.146891 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.146905 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.146924 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.146939 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.250941 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.251016 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.251036 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.251070 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.251091 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.354198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.354245 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.354260 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.354286 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.354302 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.405823 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:23:53.453583491 +0000 UTC Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.457278 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.457362 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.457388 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.457425 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.457489 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.560993 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.561055 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.561078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.561106 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.561126 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.664892 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.664965 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.664984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.665015 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.665036 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.768504 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.768571 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.768589 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.768617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.768637 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.872780 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.872836 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.872853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.872880 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.872899 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.977134 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.977229 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.977253 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.977288 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:14 crc kubenswrapper[4764]: I0127 07:17:14.977315 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:14Z","lastTransitionTime":"2026-01-27T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.080949 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.081000 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.081010 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.081028 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.081041 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.184156 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.184254 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.184275 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.184305 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.184326 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.287574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.287633 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.287645 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.287665 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.287675 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.390793 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.390839 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.390847 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.390865 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.390875 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.406675 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:01:40.92463126 +0000 UTC Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.437357 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.437539 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.437664 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.437792 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.437698 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.438009 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.438039 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.438133 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.493594 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.493668 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.493682 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.493703 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.493718 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.596360 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.596414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.596426 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.596465 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.596475 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.679673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.680091 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.680379 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.680691 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.680909 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.707504 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.714277 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.714577 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.714782 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.714953 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.715103 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.742329 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.747982 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.748068 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.748094 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.748136 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.748179 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.772371 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.779035 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.779111 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.779126 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.779146 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.779159 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.797642 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.802562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.802617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.802632 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.802654 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.802670 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.817793 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:15 crc kubenswrapper[4764]: E0127 07:17:15.817915 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.819972 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.820028 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.820041 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.820062 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.820370 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.923863 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.923916 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.923932 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.923960 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:15 crc kubenswrapper[4764]: I0127 07:17:15.923978 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:15Z","lastTransitionTime":"2026-01-27T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.027572 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.027653 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.027669 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.027693 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.027710 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.130861 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.130921 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.130939 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.130964 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.130983 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.234082 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.234151 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.234167 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.234192 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.234210 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.336694 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.336763 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.336782 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.336810 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.336852 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.407774 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:59:30.868460554 +0000 UTC Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.440393 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.440492 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.440506 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.440528 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.440542 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.543415 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.543521 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.543540 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.543565 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.543585 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.682382 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.682757 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.682856 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.683018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.683124 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.781664 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.785763 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.785887 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.785907 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.785930 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.785955 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.792768 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.807896 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.834702 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.852271 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.870164 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.887338 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.889088 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.889189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.889240 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.889266 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.889285 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.905522 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.925974 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.944402 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc 
kubenswrapper[4764]: I0127 07:17:16.961616 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.973407 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:16Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.993317 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.993397 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.993418 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.993475 4764 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 27 07:17:16 crc kubenswrapper[4764]: I0127 07:17:16.993510 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:16Z","lastTransitionTime":"2026-01-27T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.008977 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:17Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.025110 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:17Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.044091 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:17Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.067584 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:17Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.084743 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:17Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.096931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.097018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.097032 4764 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.097054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.097069 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.106048 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T07:17:17Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.200948 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.201016 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.201033 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.201057 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.201075 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.304768 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.304848 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.304873 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.304909 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.304938 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.408055 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.408120 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.408020 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:51:34.545946662 +0000 UTC Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.408146 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.408290 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.408322 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.437520 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.437562 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.437642 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:17 crc kubenswrapper[4764]: E0127 07:17:17.437788 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.437846 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:17 crc kubenswrapper[4764]: E0127 07:17:17.438014 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:17 crc kubenswrapper[4764]: E0127 07:17:17.438406 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:17 crc kubenswrapper[4764]: E0127 07:17:17.438260 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.511896 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.511992 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.512028 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.512061 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.512087 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.615635 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.615696 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.615714 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.615736 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.615751 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.718978 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.719037 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.719054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.719080 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.719101 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.823469 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.823536 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.823560 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.823589 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.823608 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.927893 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.927993 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.928087 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.928130 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:17 crc kubenswrapper[4764]: I0127 07:17:17.928154 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:17Z","lastTransitionTime":"2026-01-27T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.030931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.030979 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.030997 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.031018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.031032 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.134108 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.134160 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.134172 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.134219 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.134233 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.237115 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.237174 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.237198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.237221 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.237236 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.340607 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.340663 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.340679 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.340698 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.340711 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.408756 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:22:38.723113465 +0000 UTC Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.443690 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.443784 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.443814 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.443849 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.443909 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.458995 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.477570 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03740871
3c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.502933 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.527547 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.547104 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.547174 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.547189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.547216 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.547234 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.553675 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.579957 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.614912 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.632573 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.650495 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.650792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.651014 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.651259 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.651392 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.651542 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.668172 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc 
kubenswrapper[4764]: I0127 07:17:18.692586 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.707893 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.721860 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.733644 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.744028 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.753994 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.754026 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.754038 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.754053 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.754065 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.754648 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.765878 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:18Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.857039 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.857478 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.857595 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.857704 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.857839 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.960284 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.960362 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.960387 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.960417 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:18 crc kubenswrapper[4764]: I0127 07:17:18.960478 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:18Z","lastTransitionTime":"2026-01-27T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.063673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.063727 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.063747 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.063774 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.063796 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.166428 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.166544 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.166564 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.166593 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.166614 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.269680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.269752 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.269773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.269801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.269818 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.373594 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.373655 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.373672 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.373693 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.373709 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.409611 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:13:44.178341104 +0000 UTC Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.438071 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.438229 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:19 crc kubenswrapper[4764]: E0127 07:17:19.438320 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.438412 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:19 crc kubenswrapper[4764]: E0127 07:17:19.438646 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.438426 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:19 crc kubenswrapper[4764]: E0127 07:17:19.439029 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:19 crc kubenswrapper[4764]: E0127 07:17:19.439093 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.439654 4764 scope.go:117] "RemoveContainer" containerID="26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.477567 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.478187 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.478261 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.478330 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.478387 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.582850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.582887 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.582910 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.582934 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.582949 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.671708 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:19 crc kubenswrapper[4764]: E0127 07:17:19.671948 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:19 crc kubenswrapper[4764]: E0127 07:17:19.672013 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:35.671989811 +0000 UTC m=+68.267612337 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.686034 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.686086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.686097 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.686118 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.686133 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.783724 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.788853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.788899 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.788912 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.788931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.788950 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.800963 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.827894 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.846760 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.870631 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.885697 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.892869 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.892902 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.892910 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.892926 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.892937 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.895306 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/1.log" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.898668 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.899173 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.904341 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.925215 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e5951
80c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.938312 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.950464 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.961322 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc 
kubenswrapper[4764]: I0127 07:17:19.981744 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.995954 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.996019 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.996038 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.996067 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.996083 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:19Z","lastTransitionTime":"2026-01-27T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:19 crc kubenswrapper[4764]: I0127 07:17:19.996495 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:19Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.012283 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.030533 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.041509 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.055643 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.073216 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.088537 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.098392 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.098424 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.098449 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.098465 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.098476 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.104338 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.115265 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.126395 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.145680 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.161175 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.176586 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.191467 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.200545 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.200598 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.200609 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.200628 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.200644 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.205885 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.226239 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.240260 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.254746 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.267083 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.287300 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.304038 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.304417 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.304542 4764 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.304622 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.304696 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.308909 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.322955 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.338733 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.408001 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.408117 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.408144 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.408186 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.408216 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.410120 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:36:53.244340007 +0000 UTC Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.511068 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.511128 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.511139 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.511164 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.511179 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.613901 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.613957 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.613974 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.614001 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.614022 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.716941 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.716984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.716998 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.717021 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.717034 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.820606 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.820662 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.820680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.820702 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.820717 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.906251 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/2.log" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.907596 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/1.log" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.912562 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589" exitCode=1 Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.912659 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.912935 4764 scope.go:117] "RemoveContainer" containerID="26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.913917 4764 scope.go:117] "RemoveContainer" containerID="f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589" Jan 27 07:17:20 crc kubenswrapper[4764]: E0127 07:17:20.914240 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.923604 4764 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.923638 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.923648 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.923664 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.923673 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:20Z","lastTransitionTime":"2026-01-27T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.952989 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ea1e670c6386f6f1965b6298244cc08ad13e809fc720a1df053925f3e5cb38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:01Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:01.629577 6200 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0127 07:17:01.629587 6200 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs 
for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 07:17:01.629603 6200 services_controller.go:452] Built service openshift-network-diagnostics/network-check-target per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629619 6200 services_controller.go:444] Built service openshift-image-registry/image-registry LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 07:17:01.629623 6200 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0127 07:17:01.629620 6200 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped 
ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:20 crc kubenswrapper[4764]: I0127 07:17:20.977784 4764 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.001043 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:20Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.023030 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.026562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.026682 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.026744 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.026807 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.026882 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.042847 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.057476 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.074020 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.090569 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.106756 4764 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.125558 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.129469 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.129519 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.129528 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.129547 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.129558 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.141426 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.162592 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.174071 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.192419 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e5951
80c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.203847 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.215861 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.229228 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc 
kubenswrapper[4764]: I0127 07:17:21.232273 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.232311 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.232321 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.232338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.232351 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.291122 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.291299 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.291355 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.291377 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.291403 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291530 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:17:53.29150333 +0000 UTC m=+85.887125856 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291618 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291644 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291659 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:21 crc 
kubenswrapper[4764]: E0127 07:17:21.291659 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291780 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291694 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291881 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291921 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.291720 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:53.291707616 +0000 UTC m=+85.887330142 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.292021 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:53.291958043 +0000 UTC m=+85.887580639 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.292360 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:53.29224764 +0000 UTC m=+85.887870226 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.292422 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:17:53.292401175 +0000 UTC m=+85.888023961 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.335196 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.335247 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.335260 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.335281 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.335293 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.410378 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:32:44.122833977 +0000 UTC Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.437349 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.437521 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.437350 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.437622 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.437808 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.437849 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.438047 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.438413 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.438688 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.438757 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.438776 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.438800 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.438817 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.542536 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.543418 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.543653 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.543814 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.543956 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.648211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.648665 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.648812 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.648941 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.649061 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.752492 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.752539 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.752551 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.752567 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.752580 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.856197 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.856250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.856265 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.856286 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.856301 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.920242 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/2.log" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.925491 4764 scope.go:117] "RemoveContainer" containerID="f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589" Jan 27 07:17:21 crc kubenswrapper[4764]: E0127 07:17:21.925911 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.939238 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.957131 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.958650 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.958702 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.958712 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.958732 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.958745 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:21Z","lastTransitionTime":"2026-01-27T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.973924 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:21 crc kubenswrapper[4764]: I0127 07:17:21.996590 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.020525 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.047136 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.062188 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.062275 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.062299 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.062331 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.062359 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.070926 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whe
reabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.085997 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.105657 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.128729 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.141132 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.158782 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.164980 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.165039 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.165058 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.165086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.165104 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.172426 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc 
kubenswrapper[4764]: I0127 07:17:22.191166 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82
fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.204467 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.218239 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.240748 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.268385 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.268529 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.268546 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.268576 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.268597 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.371863 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.371911 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.371923 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.371941 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.371952 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.411388 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:53:20.423057815 +0000 UTC Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.474650 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.474708 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.474719 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.474744 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.474757 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.577880 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.577935 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.577948 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.577968 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.577984 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.681185 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.681740 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.681885 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.682005 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.682087 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.785388 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.785455 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.785468 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.785486 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.785502 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.888024 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.888067 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.888078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.888096 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.888107 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.991686 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.991764 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.991783 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.991811 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:22 crc kubenswrapper[4764]: I0127 07:17:22.991830 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:22Z","lastTransitionTime":"2026-01-27T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.094699 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.094738 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.094750 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.094766 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.094778 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.197395 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.197461 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.197475 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.197493 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.197505 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.300892 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.300964 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.300983 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.301009 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.301027 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.404694 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.404865 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.404918 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.404946 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.404971 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.412070 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:46:57.844175939 +0000 UTC Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.438109 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.438195 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.438249 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:23 crc kubenswrapper[4764]: E0127 07:17:23.438315 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.438127 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:23 crc kubenswrapper[4764]: E0127 07:17:23.438585 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:23 crc kubenswrapper[4764]: E0127 07:17:23.438771 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:23 crc kubenswrapper[4764]: E0127 07:17:23.438943 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.509175 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.509261 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.509281 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.509312 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.509334 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.612821 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.612899 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.612909 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.612930 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.612942 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.717001 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.717084 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.717102 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.717131 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.717151 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.821027 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.821104 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.821124 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.821156 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.821177 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.924510 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.924598 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.924611 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.924634 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:23 crc kubenswrapper[4764]: I0127 07:17:23.924650 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:23Z","lastTransitionTime":"2026-01-27T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.027900 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.028012 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.028035 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.028062 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.028081 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.131147 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.131203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.131220 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.131248 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.131267 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.234026 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.234086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.234105 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.234130 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.234148 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.337531 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.337603 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.337620 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.337647 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.337666 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.412290 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:42:40.408896279 +0000 UTC Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.442232 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.442298 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.442319 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.442376 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.442395 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.546046 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.546114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.546138 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.546167 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.546193 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.649774 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.649849 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.649877 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.649913 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.649939 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.754129 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.754194 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.754212 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.754239 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.754259 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.857534 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.857586 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.857598 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.857617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.857628 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.961227 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.961303 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.961329 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.961425 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:24 crc kubenswrapper[4764]: I0127 07:17:24.961528 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:24Z","lastTransitionTime":"2026-01-27T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.064323 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.064375 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.064388 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.064409 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.064423 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.167621 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.167708 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.167735 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.167769 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.167795 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.271543 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.271627 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.271657 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.271735 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.271763 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.375750 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.375867 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.375880 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.375904 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.375916 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.413238 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:29:16.681450729 +0000 UTC Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.438083 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.438165 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.438243 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.438372 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:25 crc kubenswrapper[4764]: E0127 07:17:25.438372 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:25 crc kubenswrapper[4764]: E0127 07:17:25.438675 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:25 crc kubenswrapper[4764]: E0127 07:17:25.438816 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:25 crc kubenswrapper[4764]: E0127 07:17:25.438959 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.478862 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.478937 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.478961 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.478997 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.479023 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.582036 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.582105 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.582126 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.582154 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.582177 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.684956 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.685011 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.685025 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.685047 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.685060 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.787812 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.787876 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.787894 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.787920 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.787941 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.890943 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.891028 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.891054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.891086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.891113 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.994778 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.995218 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.995362 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.995579 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:25 crc kubenswrapper[4764]: I0127 07:17:25.995738 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:25Z","lastTransitionTime":"2026-01-27T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.012744 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.013005 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.013203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.013373 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.013572 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: E0127 07:17:26.038366 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:26Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.044618 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.044716 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.044745 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.044780 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.044806 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: E0127 07:17:26.065378 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:26Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.071783 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.071854 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.071878 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.071911 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.071932 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: E0127 07:17:26.091964 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:26Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.097394 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.097585 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.097698 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.097815 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.097911 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: E0127 07:17:26.114649 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:26Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.120513 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.120554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.120565 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.120585 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.120599 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: E0127 07:17:26.137763 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:26Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:26 crc kubenswrapper[4764]: E0127 07:17:26.137898 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.140019 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.140204 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.140355 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.140524 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.140618 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.243753 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.244073 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.244141 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.244208 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.244274 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.347467 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.347586 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.347605 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.347648 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.347670 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.413531 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:47:55.151944362 +0000 UTC Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.450706 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.450784 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.450807 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.450833 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.450855 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.554183 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.554262 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.554287 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.554318 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.554338 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.657513 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.657590 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.657616 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.657654 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.657678 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.762229 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.762316 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.762336 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.762368 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.762389 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.866626 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.867023 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.867239 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.867499 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.867717 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.971939 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.972005 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.972021 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.972054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:26 crc kubenswrapper[4764]: I0127 07:17:26.972068 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:26Z","lastTransitionTime":"2026-01-27T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.075501 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.075591 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.075610 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.075637 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.075658 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.179402 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.179521 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.179541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.179571 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.179605 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.283428 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.283529 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.283542 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.283565 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.283908 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.386916 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.386959 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.386972 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.386995 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.387007 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.414326 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:20:03.307922402 +0000 UTC Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.437980 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.438005 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.438038 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:27 crc kubenswrapper[4764]: E0127 07:17:27.438203 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:27 crc kubenswrapper[4764]: E0127 07:17:27.438396 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.438419 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:27 crc kubenswrapper[4764]: E0127 07:17:27.438748 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:27 crc kubenswrapper[4764]: E0127 07:17:27.438861 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.490881 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.490961 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.490984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.491015 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.491044 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.595109 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.595187 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.595210 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.595245 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.595265 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.699324 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.699408 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.699429 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.699506 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.699530 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.804087 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.804182 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.804214 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.804258 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.804289 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.908503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.908834 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.909003 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.909127 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:27 crc kubenswrapper[4764]: I0127 07:17:27.909227 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:27Z","lastTransitionTime":"2026-01-27T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.012326 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.012375 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.012390 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.012414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.012429 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.116488 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.116568 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.116588 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.116624 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.116645 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.220044 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.220107 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.220122 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.220146 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.220164 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.324306 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.324390 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.324414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.324496 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.324524 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.414680 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:51:30.517268002 +0000 UTC Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.427950 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.428031 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.428160 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.428190 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.428208 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.466882 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.486129 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.507500 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.525504 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc 
kubenswrapper[4764]: I0127 07:17:28.530046 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.530114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.530132 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.530158 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.530180 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.560670 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.582945 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.600952 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.618542 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.633167 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.633241 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.633262 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.633295 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.633317 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.640557 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.664073 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.686681 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.706111 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.728205 4764 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.736403 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.736503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.736530 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.736559 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.736581 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.748420 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.768838 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.815003 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:1
6:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.839705 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.839753 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.839766 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.839784 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.839797 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.848746 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":
\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:28Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.943126 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.943181 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.943192 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.943213 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:28 crc kubenswrapper[4764]: I0127 07:17:28.943228 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:28Z","lastTransitionTime":"2026-01-27T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.046200 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.046269 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.046278 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.046296 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.046308 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.150054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.150113 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.150123 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.150141 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.150152 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.253287 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.253356 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.253366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.253387 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.253402 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.357079 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.357149 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.357168 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.357199 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.357218 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.415825 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:38:17.463097298 +0000 UTC Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.438392 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.438433 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.438392 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.438605 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:29 crc kubenswrapper[4764]: E0127 07:17:29.438766 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:29 crc kubenswrapper[4764]: E0127 07:17:29.438886 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:29 crc kubenswrapper[4764]: E0127 07:17:29.438979 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:29 crc kubenswrapper[4764]: E0127 07:17:29.439265 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.460082 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.460152 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.460170 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.460201 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.460219 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.562723 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.562769 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.562780 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.562798 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.562811 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.666307 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.666379 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.666402 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.666431 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.666506 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.769792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.769882 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.769900 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.769927 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.769994 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.873679 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.873761 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.873793 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.873817 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.873830 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.978152 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.978214 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.978223 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.978239 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:29 crc kubenswrapper[4764]: I0127 07:17:29.978249 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:29Z","lastTransitionTime":"2026-01-27T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.082615 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.082708 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.082748 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.082778 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.082791 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.185944 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.185986 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.185997 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.186018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.186029 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.288865 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.288914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.288929 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.288951 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.288967 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.392845 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.392903 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.392920 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.392944 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.392962 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.416789 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:41:27.979655598 +0000 UTC Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.495672 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.495722 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.495734 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.495752 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.495762 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.598796 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.599168 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.599180 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.599197 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.599208 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.703333 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.703391 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.703408 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.703440 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.703504 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.806816 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.806914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.806939 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.807024 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.807053 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.909821 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.910364 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.910576 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.910775 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:30 crc kubenswrapper[4764]: I0127 07:17:30.910943 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:30Z","lastTransitionTime":"2026-01-27T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.015542 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.015605 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.015622 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.015647 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.015664 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.119388 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.119482 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.119500 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.119534 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.119565 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.229237 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.229306 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.229324 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.229349 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.229368 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.333523 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.333574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.333587 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.333611 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.333625 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.417778 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:46:02.608012157 +0000 UTC Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437315 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437365 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437383 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437407 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437425 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437553 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437491 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:31 crc kubenswrapper[4764]: E0127 07:17:31.437739 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:31 crc kubenswrapper[4764]: E0127 07:17:31.437816 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437830 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:31 crc kubenswrapper[4764]: E0127 07:17:31.437925 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.437405 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:31 crc kubenswrapper[4764]: E0127 07:17:31.438015 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.540978 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.541031 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.541048 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.541074 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.541091 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.644377 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.644465 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.644478 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.644500 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.644512 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.748200 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.748251 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.748267 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.748292 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.748310 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.851816 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.851879 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.851895 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.851918 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.851933 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.955307 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.955363 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.955384 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.955403 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:31 crc kubenswrapper[4764]: I0127 07:17:31.955418 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:31Z","lastTransitionTime":"2026-01-27T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.057938 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.058260 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.058335 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.058400 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.058505 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.160934 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.161279 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.161363 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.161520 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.161601 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.264973 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.265044 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.265063 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.265094 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.265116 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.368169 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.368277 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.368316 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.368355 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.368379 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.418226 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:44:54.55818473 +0000 UTC Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.472042 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.472097 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.472116 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.472147 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.472166 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.575543 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.575595 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.575649 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.575691 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.575710 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.678989 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.679079 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.679118 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.679174 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.679206 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.783246 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.783302 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.783318 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.783344 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.783363 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.886910 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.886969 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.886982 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.887002 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.887014 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.989389 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.989428 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.989458 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.989481 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:32 crc kubenswrapper[4764]: I0127 07:17:32.989498 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:32Z","lastTransitionTime":"2026-01-27T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.093086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.093152 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.093171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.093204 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.093223 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.196489 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.196548 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.196556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.196573 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.196584 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.299688 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.299739 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.299777 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.299800 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.299814 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.402840 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.402874 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.402884 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.402901 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.402913 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.418384 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:37:04.269448355 +0000 UTC Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.438143 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.438171 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:33 crc kubenswrapper[4764]: E0127 07:17:33.438342 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.438381 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.438170 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:33 crc kubenswrapper[4764]: E0127 07:17:33.438746 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:33 crc kubenswrapper[4764]: E0127 07:17:33.438896 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:33 crc kubenswrapper[4764]: E0127 07:17:33.439033 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.439235 4764 scope.go:117] "RemoveContainer" containerID="f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589" Jan 27 07:17:33 crc kubenswrapper[4764]: E0127 07:17:33.439606 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.506124 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.506190 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.506204 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.506227 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.506243 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.609592 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.609648 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.609660 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.609678 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.609689 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.713505 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.713550 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.713562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.713584 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.713599 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.817245 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.817301 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.817315 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.817338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.817353 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.920506 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.920557 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.920567 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.920586 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:33 crc kubenswrapper[4764]: I0127 07:17:33.920597 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:33Z","lastTransitionTime":"2026-01-27T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.023473 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.023530 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.023543 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.023565 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.023581 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.127016 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.127075 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.127085 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.127104 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.127118 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.230171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.230220 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.230229 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.230249 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.230261 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.332922 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.332974 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.332984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.333037 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.333052 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.418533 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:10:31.530151286 +0000 UTC Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.436657 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.436850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.436954 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.437053 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.437125 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.541117 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.541176 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.541189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.541216 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.541231 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.644465 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.644546 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.644569 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.644598 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.644618 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.748100 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.748160 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.748173 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.748196 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.748211 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.851685 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.851801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.851838 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.851873 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.851900 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.955144 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.955202 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.955215 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.955234 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:34 crc kubenswrapper[4764]: I0127 07:17:34.955256 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:34Z","lastTransitionTime":"2026-01-27T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.058590 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.058905 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.058980 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.059046 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.059119 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.161789 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.161861 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.161871 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.161891 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.161901 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.264120 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.264172 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.264192 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.264216 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.264234 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.367086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.367164 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.367185 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.367230 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.367253 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.418994 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:04:02.983530378 +0000 UTC Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.437291 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:35 crc kubenswrapper[4764]: E0127 07:17:35.437426 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.437576 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.437676 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:35 crc kubenswrapper[4764]: E0127 07:17:35.437772 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.437787 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:35 crc kubenswrapper[4764]: E0127 07:17:35.437840 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:35 crc kubenswrapper[4764]: E0127 07:17:35.437879 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.470810 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.470888 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.470938 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.470972 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.470989 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.574591 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.574650 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.574659 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.574679 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.574691 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.681718 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.681769 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.681779 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.681801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.681812 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.685295 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:35 crc kubenswrapper[4764]: E0127 07:17:35.685508 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:35 crc kubenswrapper[4764]: E0127 07:17:35.685694 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:07.685671834 +0000 UTC m=+100.281294360 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.784903 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.785347 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.785551 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.785718 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.785863 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.889020 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.889074 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.889088 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.889109 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.889124 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.991165 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.991201 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.991211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.991227 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:35 crc kubenswrapper[4764]: I0127 07:17:35.991239 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:35Z","lastTransitionTime":"2026-01-27T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.095554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.095617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.095628 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.095650 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.095664 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.198605 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.198642 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.198653 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.198671 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.198684 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.301897 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.302393 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.302614 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.302768 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.302905 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.321898 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.321948 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.321959 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.321977 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.321988 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: E0127 07:17:36.334749 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:36Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.340801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.340856 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.340947 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.340988 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.341006 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: E0127 07:17:36.356876 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:36Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.361212 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.361256 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.361271 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.361293 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.361310 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: E0127 07:17:36.380354 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:36Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.385802 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.385835 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.385848 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.385866 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.385881 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: E0127 07:17:36.407315 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:36Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.414078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.414153 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.414174 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.414198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.414218 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.419557 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:15:01.994212631 +0000 UTC Jan 27 07:17:36 crc kubenswrapper[4764]: E0127 07:17:36.434908 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",
\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:36Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:36 crc kubenswrapper[4764]: E0127 07:17:36.435086 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.439361 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.439409 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.439423 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.439466 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.439484 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.542916 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.543022 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.543078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.543158 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.543233 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.646592 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.646630 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.646639 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.646655 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.646666 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.749105 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.749422 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.749507 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.749570 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.749625 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.852777 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.852872 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.852898 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.852927 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.852946 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.956150 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.957330 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.957573 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.957939 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:36 crc kubenswrapper[4764]: I0127 07:17:36.958132 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:36Z","lastTransitionTime":"2026-01-27T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.061625 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.061701 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.061723 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.061754 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.061779 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.165026 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.165083 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.165092 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.165112 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.165121 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.267243 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.267319 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.267338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.267364 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.267381 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.370719 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.371119 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.371224 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.371331 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.371426 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.420478 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:38:18.180903974 +0000 UTC Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.438181 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.438231 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.438271 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.438364 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:37 crc kubenswrapper[4764]: E0127 07:17:37.438534 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:37 crc kubenswrapper[4764]: E0127 07:17:37.438658 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:37 crc kubenswrapper[4764]: E0127 07:17:37.438877 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:37 crc kubenswrapper[4764]: E0127 07:17:37.438965 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.478392 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.479833 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.479884 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.479929 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.479944 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.582486 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.582541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.582554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.582573 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.582587 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.686688 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.686761 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.686783 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.686834 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.686857 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.789995 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.790046 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.790060 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.790078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.790091 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.893400 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.893478 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.893497 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.893514 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.893528 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.987785 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/0.log" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.987867 4764 generic.go:334] "Generic (PLEG): container finished" podID="e936b8fc-81d9-4222-a66f-742b2db87386" containerID="1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999" exitCode=1 Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.987916 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerDied","Data":"1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999"} Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.988534 4764 scope.go:117] "RemoveContainer" containerID="1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.995320 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.995369 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.995378 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.995394 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:37 crc kubenswrapper[4764]: I0127 07:17:37.995405 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:37Z","lastTransitionTime":"2026-01-27T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.008256 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.020841 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.036378 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.051010 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc 
kubenswrapper[4764]: I0127 07:17:38.072569 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.086587 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.098024 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.098080 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.098094 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc 
kubenswrapper[4764]: I0127 07:17:38.098114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.098129 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.102695 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.118860 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.135973 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.150871 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.163838 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.175935 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.188694 4764 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.201826 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.201902 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.201917 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.201938 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.201951 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.205941 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.221748 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.237659 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.251706 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.304842 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.304895 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.304906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.304927 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.304940 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.407935 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.408004 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.408016 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.408036 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.408049 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.421529 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:16:51.328521424 +0000 UTC Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.452497 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.460818 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.475043 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.494987 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.511554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.511618 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.511632 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.511687 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.511703 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.512236 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.533857 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.547489 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.565131 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.583056 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.594517 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.608363 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.614048 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.614091 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.614102 4764 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.614120 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.614133 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.626530 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.643688 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.658662 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.672553 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc 
kubenswrapper[4764]: I0127 07:17:38.689259 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82
fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.701406 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.717483 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.717528 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.717541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.717562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.717579 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.723911 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:38Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.819819 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.819874 4764 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.819885 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.819904 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.819920 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.923902 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.924377 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.924395 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.924416 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.924431 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:38Z","lastTransitionTime":"2026-01-27T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.994563 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/0.log" Jan 27 07:17:38 crc kubenswrapper[4764]: I0127 07:17:38.994701 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerStarted","Data":"edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.007824 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.019962 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.026600 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.026793 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.026925 4764 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.027052 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.027180 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.035879 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.052703 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.068146 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.090046 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.106983 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.119426 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.129726 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.129753 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.129764 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.129781 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.129793 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.133728 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.146170 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc 
kubenswrapper[4764]: I0127 07:17:39.164213 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.180569 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.192478 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.209308 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.223197 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.231487 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.231568 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.231589 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.231607 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.231622 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.232726 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54fa6cae-5eb5-4360-9bcc-5e2e533b140d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.250476 4764 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.265729 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:39Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.334045 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.334092 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.334102 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.334119 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.334128 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.422147 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:35:57.20664536 +0000 UTC Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440057 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440149 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440170 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440201 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440231 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440277 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440326 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440425 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.440553 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:39 crc kubenswrapper[4764]: E0127 07:17:39.440547 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:39 crc kubenswrapper[4764]: E0127 07:17:39.440729 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:39 crc kubenswrapper[4764]: E0127 07:17:39.440877 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:39 crc kubenswrapper[4764]: E0127 07:17:39.441207 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.543767 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.544143 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.544249 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.544356 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.544491 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.647419 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.647503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.647514 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.647533 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.647545 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.750430 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.750503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.750513 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.750532 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.750544 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.853729 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.853778 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.853789 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.853808 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.853823 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.956124 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.956182 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.956193 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.956210 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:39 crc kubenswrapper[4764]: I0127 07:17:39.956223 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:39Z","lastTransitionTime":"2026-01-27T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.058839 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.058881 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.058891 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.058908 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.058922 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.161998 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.162042 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.162050 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.162067 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.162078 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.265675 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.265725 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.265736 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.265757 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.265770 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.368421 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.368498 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.368511 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.368531 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.368577 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.423364 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:47:44.686789188 +0000 UTC Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.472612 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.472676 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.472694 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.472717 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.472729 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.575142 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.575198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.575211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.575232 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.575245 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.677818 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.677895 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.677914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.677946 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.677965 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.781482 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.781566 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.781580 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.781602 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.781618 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.884680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.884741 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.884754 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.884777 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.884789 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.987710 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.987768 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.987789 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.987822 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:40 crc kubenswrapper[4764]: I0127 07:17:40.987847 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:40Z","lastTransitionTime":"2026-01-27T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.091076 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.091133 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.091150 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.091174 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.091192 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.194714 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.194771 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.194787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.194810 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.194825 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.298723 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.298792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.298819 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.298853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.298875 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.402495 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.402595 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.402616 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.402649 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.402670 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.424218 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:54:59.967519327 +0000 UTC Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.437751 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.437806 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.437835 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.437758 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:41 crc kubenswrapper[4764]: E0127 07:17:41.438006 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:41 crc kubenswrapper[4764]: E0127 07:17:41.438232 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:41 crc kubenswrapper[4764]: E0127 07:17:41.438382 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:41 crc kubenswrapper[4764]: E0127 07:17:41.438547 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.506077 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.506169 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.506196 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.506233 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.506256 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.609654 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.609737 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.609752 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.609773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.609795 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.713795 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.713869 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.713890 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.713919 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.713938 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.817945 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.818017 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.818036 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.818064 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.818085 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.921283 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.921363 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.921433 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.921502 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:41 crc kubenswrapper[4764]: I0127 07:17:41.921524 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:41Z","lastTransitionTime":"2026-01-27T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.025587 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.025662 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.025680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.025706 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.025726 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.128554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.128632 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.128649 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.128676 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.128693 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.232263 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.232341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.232361 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.232395 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.232419 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.336219 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.336268 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.336283 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.336307 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.336326 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.424983 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:48:51.750803448 +0000 UTC Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.440270 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.440346 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.440357 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.440374 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.440384 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.543320 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.543420 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.543487 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.543522 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.543545 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.647359 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.647417 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.647426 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.647462 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.647474 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.750787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.750855 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.750870 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.750889 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.750906 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.855137 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.855189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.855203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.855225 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.855240 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.959327 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.959389 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.959407 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.959433 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:42 crc kubenswrapper[4764]: I0127 07:17:42.959484 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:42Z","lastTransitionTime":"2026-01-27T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.063525 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.063597 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.063617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.063646 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.063665 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.167305 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.167375 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.167396 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.167425 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.167483 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.271018 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.271100 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.271318 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.271356 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.271377 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.375425 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.375534 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.375558 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.375588 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.375614 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.426082 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:26:04.167767106 +0000 UTC Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.437593 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.437709 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.437734 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:43 crc kubenswrapper[4764]: E0127 07:17:43.437768 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.437828 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:43 crc kubenswrapper[4764]: E0127 07:17:43.437857 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:43 crc kubenswrapper[4764]: E0127 07:17:43.438034 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:43 crc kubenswrapper[4764]: E0127 07:17:43.438124 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.479752 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.479804 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.479823 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.479853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.479873 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.582327 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.582402 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.582420 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.582479 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.582500 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.685360 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.685409 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.685423 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.685487 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.685544 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.789258 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.789314 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.789329 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.789356 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.789371 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.893060 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.893144 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.893168 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.893203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.893227 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.996036 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.996096 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.996119 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.996147 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:43 crc kubenswrapper[4764]: I0127 07:17:43.996166 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:43Z","lastTransitionTime":"2026-01-27T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.099002 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.099054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.099064 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.099080 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.099094 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.201727 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.201799 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.201823 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.201853 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.201875 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.304904 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.304945 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.304953 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.304970 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.304980 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.407589 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.407648 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.407661 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.407683 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.407697 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.427054 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:55:54.141778079 +0000 UTC Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.511496 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.511549 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.511562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.511583 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.511600 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.615233 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.615304 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.615370 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.615406 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.615462 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.719122 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.719198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.719224 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.719254 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.719277 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.823074 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.823130 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.823147 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.823173 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.823192 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.926347 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.927023 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.927123 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.927254 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:44 crc kubenswrapper[4764]: I0127 07:17:44.927368 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:44Z","lastTransitionTime":"2026-01-27T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.030140 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.030180 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.030188 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.030203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.030213 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.133758 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.133837 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.133862 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.133905 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.133926 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.237628 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.237694 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.237712 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.237739 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.237758 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.340887 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.340957 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.340976 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.341004 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.341025 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.428150 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:34:41.532508258 +0000 UTC Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.437863 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.437863 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.437885 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.437885 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:45 crc kubenswrapper[4764]: E0127 07:17:45.438263 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:45 crc kubenswrapper[4764]: E0127 07:17:45.438600 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:45 crc kubenswrapper[4764]: E0127 07:17:45.438669 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:45 crc kubenswrapper[4764]: E0127 07:17:45.438737 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.444635 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.444703 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.444723 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.444755 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.444775 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.547479 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.547520 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.547530 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.547548 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.547558 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.649860 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.649894 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.649902 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.649918 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.649927 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.752540 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.752592 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.752603 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.752623 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.752640 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.855191 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.855231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.855240 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.855257 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.855269 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.957996 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.958056 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.958072 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.958096 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:45 crc kubenswrapper[4764]: I0127 07:17:45.958112 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:45Z","lastTransitionTime":"2026-01-27T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.060231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.060299 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.060313 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.060339 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.060353 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.163279 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.163323 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.163331 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.163349 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.163360 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.266285 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.266351 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.266368 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.266469 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.266491 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.369230 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.369291 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.369307 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.369327 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.369341 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.429252 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:39:31.134755959 +0000 UTC Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.439760 4764 scope.go:117] "RemoveContainer" containerID="f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.474435 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.474545 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.474564 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.474593 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.474618 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.578267 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.578331 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.578350 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.578377 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.578395 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.688938 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.688998 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.689015 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.689039 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.689057 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.738196 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.738248 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.738261 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.738281 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.738294 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: E0127 07:17:46.750767 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.755461 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.755517 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.755534 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.755553 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.755564 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: E0127 07:17:46.769714 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.774394 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.774458 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.774471 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.774488 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.774501 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: E0127 07:17:46.789537 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.795284 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.795316 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.795327 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.795341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.795352 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: E0127 07:17:46.811882 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.816270 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.816329 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.816343 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.816366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.816391 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: E0127 07:17:46.830627 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:46 crc kubenswrapper[4764]: E0127 07:17:46.830791 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.833188 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.833225 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.833235 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.833252 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.833264 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.936143 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.936203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.936223 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.936250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:46 crc kubenswrapper[4764]: I0127 07:17:46.936268 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:46Z","lastTransitionTime":"2026-01-27T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.027742 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/2.log" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.031746 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.032256 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.039984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.040290 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.040381 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.040502 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.040613 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.055844 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941
ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.076971 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.096421 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.116728 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.137493 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.142761 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.142783 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.142792 4764 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.142824 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.142837 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.159276 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.178620 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54fa6cae-5eb5-4360-9bcc-5e2e533b140d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.197876 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.216477 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.233917 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.245841 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.245901 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.245915 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.245938 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.245952 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.247386 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.259138 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03740871
3c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.272802 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.287237 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.320324 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.333233 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc 
kubenswrapper[4764]: I0127 07:17:47.348103 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.348153 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.348166 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.348186 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.348201 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.353093 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.376339 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.429367 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:17:25.044429409 +0000 UTC Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.437588 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.437695 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:47 crc kubenswrapper[4764]: E0127 07:17:47.438504 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.438605 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.438695 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:47 crc kubenswrapper[4764]: E0127 07:17:47.438831 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:47 crc kubenswrapper[4764]: E0127 07:17:47.438874 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:47 crc kubenswrapper[4764]: E0127 07:17:47.438965 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.450462 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.450493 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.450503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.450520 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.450532 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.554351 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.554466 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.554490 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.554522 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.554545 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.657671 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.657745 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.657764 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.657792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.657813 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.762069 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.762503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.762596 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.762663 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.762730 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.866381 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.866502 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.866522 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.866549 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.866574 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.970678 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.971149 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.971426 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.971784 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:47 crc kubenswrapper[4764]: I0127 07:17:47.972073 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:47Z","lastTransitionTime":"2026-01-27T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.039493 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/3.log" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.040552 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/2.log" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.045507 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" exitCode=1 Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.045797 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.046033 4764 scope.go:117] "RemoveContainer" containerID="f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.046543 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:17:48 crc kubenswrapper[4764]: E0127 07:17:48.046802 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.075460 4764 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c
8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.075830 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.075888 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.075906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.075934 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.075989 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.097494 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.117368 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03740871
3c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.135328 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.151104 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.172531 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.178556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.178601 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.178612 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.178633 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.178652 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.190262 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc 
kubenswrapper[4764]: I0127 07:17:48.213119 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82
fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.232108 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.247092 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.274164 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:47Z\\\",\\\"message\\\":\\\"p\\\\nI0127 07:17:47.652915 6819 factory.go:656] Stopping watch factory\\\\nI0127 07:17:47.652928 6819 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 07:17:47.652835 6819 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652937 6819 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:47.652938 6819 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 
07:17:47.652950 6819 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-lh5rf in node crc\\\\nI0127 07:17:47.652955 6819 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf after 0 failed attempt(s)\\\\nI0127 07:17:47.652959 6819 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652976 6819 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 216.486µs)\\\\nI0127 07:17:47.653008 6819 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:17:47.653031 6819 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 07:17:47.653131 6819 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/
cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.282089 4764 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.282346 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.282549 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.282761 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.282943 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.290404 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da
793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.307618 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.323568 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.338104 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.359791 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.377792 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54fa6cae-5eb5-4360-9bcc-5e2e533b140d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.386220 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.386509 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.386673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.386827 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.386970 4764 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.397499 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.429872 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:39:48.475242241 +0000 UTC Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.473379 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b897a732271eaf22b2ec81a747a273db6afd14fe335773ce1d041235967589\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:20Z\\\",\\\"message\\\":\\\"operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 07:17:20.303197 6426 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:20.303207 6426 services_controller.go:452] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303220 6426 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 07:17:20.303227 6426 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:17:20.303245 6426 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:17:20.303280 6426 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:47Z\\\",\\\"message\\\":\\\"p\\\\nI0127 07:17:47.652915 6819 factory.go:656] Stopping watch factory\\\\nI0127 07:17:47.652928 6819 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 07:17:47.652835 6819 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652937 6819 ovnkube.go:599] Stopped ovnkube\\\\nI0127 07:17:47.652938 6819 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 
07:17:47.652950 6819 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-lh5rf in node crc\\\\nI0127 07:17:47.652955 6819 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf after 0 failed attempt(s)\\\\nI0127 07:17:47.652959 6819 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652976 6819 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 216.486µs)\\\\nI0127 07:17:47.653008 6819 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:17:47.653031 6819 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 07:17:47.653131 6819 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/
cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.489575 4764 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.494185 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.494263 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.494279 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.494303 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.494326 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.513644 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.530432 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54fa6cae-5eb5-4360-9bcc-5e2e533b140d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.551755 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.569456 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.589012 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.598782 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.598818 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.598827 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.598843 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.598856 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.607718 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.625550 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.642250 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.660052 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.679221 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.692236 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.701698 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.701745 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.701762 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.701787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.701802 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.707831 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc 
kubenswrapper[4764]: I0127 07:17:48.725217 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82
fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.743040 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.760845 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.776259 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:48 crc 
kubenswrapper[4764]: I0127 07:17:48.804424 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.804541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.804555 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.804578 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.804593 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.907494 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.907558 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.907577 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.907603 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:48 crc kubenswrapper[4764]: I0127 07:17:48.907620 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:48Z","lastTransitionTime":"2026-01-27T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.010189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.010296 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.010320 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.010346 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.010365 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.051180 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/3.log" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.055756 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:17:49 crc kubenswrapper[4764]: E0127 07:17:49.056132 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.068171 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.080260 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.111500 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.116504 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.116542 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.116556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.116576 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.116592 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.131983 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.162090 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.174547 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.192371 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e5951
80c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.206931 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.219836 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.219963 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.220048 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.220122 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.220191 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.221139 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.236468 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc 
kubenswrapper[4764]: I0127 07:17:49.271252 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:47Z\\\",\\\"message\\\":\\\"p\\\\nI0127 07:17:47.652915 6819 factory.go:656] Stopping watch factory\\\\nI0127 07:17:47.652928 6819 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 07:17:47.652835 6819 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652937 6819 ovnkube.go:599] Stopped 
ovnkube\\\\nI0127 07:17:47.652938 6819 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652950 6819 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-lh5rf in node crc\\\\nI0127 07:17:47.652955 6819 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf after 0 failed attempt(s)\\\\nI0127 07:17:47.652959 6819 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652976 6819 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 216.486µs)\\\\nI0127 07:17:47.653008 6819 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:17:47.653031 6819 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 07:17:47.653131 6819 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.286064 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.298649 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.310428 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.320773 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54fa6cae-5eb5-4360-9bcc-5e2e533b140d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.322344 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.322470 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.322544 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.322649 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.322725 4764 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.336299 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.348502 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.360234 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:49Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.425967 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.426016 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.426029 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.426048 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.426062 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.430872 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:45:45.048906084 +0000 UTC Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.438309 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:49 crc kubenswrapper[4764]: E0127 07:17:49.438522 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.438641 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.438656 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:49 crc kubenswrapper[4764]: E0127 07:17:49.438742 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.438811 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:49 crc kubenswrapper[4764]: E0127 07:17:49.438869 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:49 crc kubenswrapper[4764]: E0127 07:17:49.438931 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.529288 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.529362 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.529386 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.529414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.529473 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.631983 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.632042 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.632053 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.632071 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.632082 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.734866 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.734924 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.734936 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.734957 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.734971 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.838574 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.838650 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.838669 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.838697 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.838722 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.942108 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.942171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.942188 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.942211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:49 crc kubenswrapper[4764]: I0127 07:17:49.942229 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:49Z","lastTransitionTime":"2026-01-27T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.046989 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.047044 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.047058 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.047078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.047093 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.150784 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.150847 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.150860 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.150884 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.150900 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.254790 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.254921 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.254982 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.255013 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.255074 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.359142 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.359201 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.359218 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.359244 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.359264 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.432004 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:10:12.460748635 +0000 UTC Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.462394 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.462472 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.462483 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.462501 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.462512 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.565718 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.565751 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.565762 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.565778 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.565789 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.668929 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.668974 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.668985 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.669002 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.669015 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.772567 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.772673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.772698 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.772734 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.772761 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.875705 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.875754 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.875764 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.875782 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.875793 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.979789 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.979850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.979867 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.979896 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:50 crc kubenswrapper[4764]: I0127 07:17:50.979914 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:50Z","lastTransitionTime":"2026-01-27T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.082961 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.083017 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.083034 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.083059 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.083079 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.185972 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.186366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.186485 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.186580 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.186663 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.291006 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.291109 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.291133 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.291169 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.291197 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.394501 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.394557 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.394599 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.394621 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.394639 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.432919 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:08:03.284305979 +0000 UTC Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.438215 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.438259 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:51 crc kubenswrapper[4764]: E0127 07:17:51.438413 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.438501 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.438536 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:51 crc kubenswrapper[4764]: E0127 07:17:51.438664 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:51 crc kubenswrapper[4764]: E0127 07:17:51.438724 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:51 crc kubenswrapper[4764]: E0127 07:17:51.438807 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.497702 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.497756 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.497773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.497792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.497807 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.601294 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.601342 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.601350 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.601366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.601377 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.705144 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.705308 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.705372 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.705458 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.705534 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.809356 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.809920 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.809949 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.810004 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.810032 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.914001 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.914072 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.914098 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.914132 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:51 crc kubenswrapper[4764]: I0127 07:17:51.914157 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:51Z","lastTransitionTime":"2026-01-27T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.018910 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.018958 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.018977 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.019002 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.019019 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.123430 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.123941 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.124300 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.124773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.125145 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.227969 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.228348 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.228470 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.228588 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.228708 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.331529 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.331617 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.331636 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.331663 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.331681 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.433292 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:37:38.411599329 +0000 UTC Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.434832 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.434870 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.434886 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.434907 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.434920 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.538902 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.538979 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.538997 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.539024 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.539044 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.642820 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.642891 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.642909 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.642937 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.642957 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.746326 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.746824 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.747087 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.747337 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.747730 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.852231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.852338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.852362 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.852391 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.852414 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.955953 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.956527 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.956734 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.956888 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:52 crc kubenswrapper[4764]: I0127 07:17:52.957017 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:52Z","lastTransitionTime":"2026-01-27T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.060635 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.060706 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.060725 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.060758 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.060779 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.164589 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.164687 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.164712 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.164741 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.164761 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.269075 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.269134 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.269147 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.269171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.269186 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.315066 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315311 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:57.315267849 +0000 UTC m=+149.910890375 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.315410 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.315533 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.315609 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.315675 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315763 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315784 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315803 4764 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315826 4764 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315831 4764 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315863 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:57.315854555 +0000 UTC m=+149.911477081 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315931 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.315901426 +0000 UTC m=+149.911524182 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315983 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.315965588 +0000 UTC m=+149.911588364 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.315844 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.316483 4764 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.316518 4764 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.316613 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.316594145 +0000 UTC m=+149.912216711 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.372884 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.372951 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.372965 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.372982 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.372995 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.434134 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:43:55.632808032 +0000 UTC Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.437681 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.437709 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.437931 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.438029 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.438311 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.438571 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.439057 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:53 crc kubenswrapper[4764]: E0127 07:17:53.439180 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.460793 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.477202 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.477265 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.477282 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.477310 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.477328 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.580696 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.580779 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.580802 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.580826 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.580846 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.683673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.683714 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.683723 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.683738 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.683748 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.787155 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.787214 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.787227 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.787250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.787267 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.890611 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.890662 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.890677 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.890700 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.890713 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.994589 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.994667 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.994686 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.994717 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:53 crc kubenswrapper[4764]: I0127 07:17:53.994736 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:53Z","lastTransitionTime":"2026-01-27T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.098323 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.098397 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.098418 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.098476 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.098498 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.202166 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.202234 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.202247 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.202273 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.202289 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.306027 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.306467 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.306611 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.306683 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.306704 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.409662 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.409704 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.409715 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.409734 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.409747 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.434492 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:24:52.491049928 +0000 UTC Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.514200 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.514706 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.514850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.515005 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.515124 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.618844 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.618906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.618924 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.618955 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.618977 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.722321 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.722405 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.722425 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.722531 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.722556 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.825913 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.826420 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.826562 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.826666 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.826777 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.930275 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.930607 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.930722 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.930795 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:54 crc kubenswrapper[4764]: I0127 07:17:54.930853 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:54Z","lastTransitionTime":"2026-01-27T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.034510 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.034566 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.034581 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.034600 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.034618 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.137846 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.137896 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.137909 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.137928 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.137945 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.240792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.240870 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.240888 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.240914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.240933 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.343892 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.343941 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.343949 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.343967 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.343979 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.434713 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 12:21:27.872169115 +0000 UTC Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.438356 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.438481 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.438654 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.438655 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:55 crc kubenswrapper[4764]: E0127 07:17:55.438585 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:55 crc kubenswrapper[4764]: E0127 07:17:55.438976 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:55 crc kubenswrapper[4764]: E0127 07:17:55.439039 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:55 crc kubenswrapper[4764]: E0127 07:17:55.439136 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.446762 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.446840 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.446862 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.446890 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.446909 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.550347 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.550394 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.550407 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.550426 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.550468 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.653914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.653981 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.653998 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.654023 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.654047 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.756328 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.756374 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.756387 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.756405 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.756419 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.858976 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.859253 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.859319 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.859538 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.859619 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.963006 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.963080 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.963103 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.963129 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:55 crc kubenswrapper[4764]: I0127 07:17:55.963147 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:55Z","lastTransitionTime":"2026-01-27T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.065705 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.066047 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.066117 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.066250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.066320 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.169874 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.170218 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.170332 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.170429 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.170554 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.273355 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.273910 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.273998 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.274076 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.274137 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.377208 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.377242 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.377250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.377264 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.377274 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.435063 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:29:22.280529337 +0000 UTC Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.479698 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.479751 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.479763 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.479781 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.479797 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.583080 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.583424 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.583553 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.583678 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.583744 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.686166 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.686195 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.686203 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.686216 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.686225 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.788555 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.788605 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.788615 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.788632 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.788645 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.891155 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.891210 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.891222 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.891243 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.891256 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.994850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.994903 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.994916 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.994940 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:56 crc kubenswrapper[4764]: I0127 07:17:56.994953 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:56Z","lastTransitionTime":"2026-01-27T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.097728 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.097795 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.097811 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.097827 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.097840 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.136094 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.136155 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.136173 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.136198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.136217 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.152237 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.156666 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.156720 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.156740 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.156763 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.156779 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.175117 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.179041 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.179186 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.179205 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.179231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.179250 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.192801 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.197672 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.197737 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.197754 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.197775 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.197789 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.210527 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.215485 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.215548 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.215567 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.215592 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.215613 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.232345 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.232602 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.235284 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.235377 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.235400 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.235421 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.235461 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.338937 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.339014 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.339036 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.339064 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.339083 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.435527 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:01:46.67804194 +0000 UTC Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.438001 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.438081 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.438100 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.438096 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.438220 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.438366 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.438765 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:57 crc kubenswrapper[4764]: E0127 07:17:57.439304 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.442241 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.442316 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.442343 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.442366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.442385 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.546355 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.546400 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.546413 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.546429 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.546464 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.650211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.650261 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.650278 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.650299 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.650312 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.753733 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.753787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.753799 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.753818 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.753831 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.856924 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.857308 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.857622 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.857874 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.858099 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.962004 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.962083 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.962105 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.962142 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:57 crc kubenswrapper[4764]: I0127 07:17:57.962165 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:57Z","lastTransitionTime":"2026-01-27T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.066268 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.066338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.066355 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.066384 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.066404 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.170114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.170607 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.170777 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.170931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.171060 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.274801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.274872 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.274895 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.274921 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.274948 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.379069 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.379115 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.379128 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.379148 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.379161 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.436567 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:19:37.367018765 +0000 UTC Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.455524 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 
07:17:58.467592 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc 
kubenswrapper[4764]: I0127 07:17:58.479634 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82
fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.483048 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.483075 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.483085 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.483102 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.483112 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.490675 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.511591 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:47Z\\\",\\\"message\\\":\\\"p\\\\nI0127 07:17:47.652915 6819 factory.go:656] Stopping watch factory\\\\nI0127 07:17:47.652928 6819 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 07:17:47.652835 6819 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652937 6819 ovnkube.go:599] Stopped 
ovnkube\\\\nI0127 07:17:47.652938 6819 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652950 6819 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-lh5rf in node crc\\\\nI0127 07:17:47.652955 6819 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf after 0 failed attempt(s)\\\\nI0127 07:17:47.652959 6819 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652976 6819 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 216.486µs)\\\\nI0127 07:17:47.653008 6819 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:17:47.653031 6819 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 07:17:47.653131 6819 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.525031 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.537616 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.553059 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.572143 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.586272 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.586596 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.586685 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.586776 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.586918 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.594405 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.614571 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.626994 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54fa6cae-5eb5-4360-9bcc-5e2e533b140d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.652715 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"513625cf-9542-409f-a0ad-e6b011f14b38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db8a05b1bad2810fd291ace5f0166b7061681ff4757d1b89368e48fd293f28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d530f1c58454f3ee757a0bc58fa821204e378d643a7bb3dd01d122feaf02497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91d8f3d3f5fdb82fe9c5366c970b42d538d71a763eda361b412ce4349cf2144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e3975853f90b5a1f71fa29f8ea50579644ce15a2c72a0f9debdfdcf72f2387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://060deb63e32b34b747696d5c32a61c6cf252c55733c7c0f5a3922fe54982a781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c3fbb091d06c72427beaf78eb02f106f8bc8243dbcf79bed8de0efdf6f6f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c3fbb091d06c72427beaf78eb02f106f8bc8243dbcf79bed8de0efdf6f6f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19614eb55e5db76fee77bdcb30a3efca98da0974ea3a35b1741a9f147729716c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19614eb55e5db76fee77bdcb30a3efca98da0974ea3a35b1741a9f147729716c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7b3e9660c7872c9a32d6da0cd73bd6f88a459d169aff5982abb180e4e1acf1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b3e9660c7872c9a32d6da0cd73bd6f88a459d169aff5982abb180e4e1acf1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.671417 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.686573 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.689869 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.689919 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.689937 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.689963 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.689983 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.697298 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1912344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.708138 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde85085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78
a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.719962 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.731274 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:17:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.793166 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.793231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.793246 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.793272 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.793290 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.896879 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.896946 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.896964 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.896984 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:58 crc kubenswrapper[4764]: I0127 07:17:58.896999 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:58Z","lastTransitionTime":"2026-01-27T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:58.999977 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.000020 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.000032 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.000047 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.000058 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.102193 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.102246 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.102273 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.102296 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.102314 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.204845 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.204878 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.204891 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.204906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.204915 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.306835 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.307171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.307257 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.307366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.307460 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.410383 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.410424 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.410448 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.410464 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.410474 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.437847 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:10:03.388942353 +0000 UTC Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.437885 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.437922 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.437947 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.437975 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:17:59 crc kubenswrapper[4764]: E0127 07:17:59.438509 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:17:59 crc kubenswrapper[4764]: E0127 07:17:59.438689 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:17:59 crc kubenswrapper[4764]: E0127 07:17:59.438802 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:17:59 crc kubenswrapper[4764]: E0127 07:17:59.439210 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.513833 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.513891 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.513909 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.513934 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.513953 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.616800 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.616860 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.616879 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.616903 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.616921 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.719929 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.719989 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.720003 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.720025 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.720039 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.831701 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.831778 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.831801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.831836 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.831859 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.935288 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.935351 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.935369 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.935393 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:17:59 crc kubenswrapper[4764]: I0127 07:17:59.935409 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:17:59Z","lastTransitionTime":"2026-01-27T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.038755 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.038904 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.038933 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.038966 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.038987 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.142197 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.142262 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.142280 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.142305 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.142325 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.244714 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.244757 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.244768 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.244785 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.244800 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.348561 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.348635 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.348654 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.348687 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.348705 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.438773 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:40:22.46229965 +0000 UTC Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.452065 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.452130 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.452146 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.452170 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.452189 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.555911 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.555980 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.555998 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.556025 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.556047 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.660704 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.660786 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.660805 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.660833 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.660851 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.764580 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.764656 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.764674 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.764701 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.764721 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.868114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.868176 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.868193 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.868219 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.868236 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.971480 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.971544 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.971561 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.971583 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:00 crc kubenswrapper[4764]: I0127 07:18:00.971599 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:00Z","lastTransitionTime":"2026-01-27T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.074717 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.074764 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.074775 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.074793 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.074950 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.177491 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.177554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.177570 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.177591 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.177606 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.280414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.280470 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.280480 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.280495 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.280505 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.383338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.383383 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.383392 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.383407 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.383418 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.437860 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:01 crc kubenswrapper[4764]: E0127 07:18:01.438135 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.438204 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.438297 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:01 crc kubenswrapper[4764]: E0127 07:18:01.438506 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.438603 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.438894 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:52:24.115838562 +0000 UTC Jan 27 07:18:01 crc kubenswrapper[4764]: E0127 07:18:01.438910 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:01 crc kubenswrapper[4764]: E0127 07:18:01.439005 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.439272 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:18:01 crc kubenswrapper[4764]: E0127 07:18:01.439419 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.486235 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.486269 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.486280 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.486296 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.486306 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.589992 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.590085 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.590108 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.590141 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.590164 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.693885 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.693966 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.693988 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.694019 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.694044 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.798322 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.798394 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.798413 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.798472 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.798497 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.902542 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.902610 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.902623 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.902646 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:01 crc kubenswrapper[4764]: I0127 07:18:01.902661 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:01Z","lastTransitionTime":"2026-01-27T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.006613 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.006688 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.006730 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.006750 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.006764 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.109167 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.109211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.109222 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.109237 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.109248 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.212629 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.212680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.212690 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.212712 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.212722 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.314820 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.314862 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.314871 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.314885 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.314896 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.419063 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.419131 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.419146 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.419175 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.419192 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.439666 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:06:37.08418943 +0000 UTC Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.522225 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.522298 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.522317 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.522345 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.522364 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.625411 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.625505 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.625517 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.625533 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.625547 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.729193 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.729266 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.729306 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.729343 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.729365 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.832094 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.832231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.832252 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.832323 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.832345 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.936360 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.936852 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.936871 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.936901 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:02 crc kubenswrapper[4764]: I0127 07:18:02.936920 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:02Z","lastTransitionTime":"2026-01-27T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.039906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.039952 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.039962 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.039979 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.039990 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.143290 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.143394 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.143416 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.143495 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.143522 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.247484 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.247545 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.247563 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.247588 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.247606 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.351088 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.351177 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.351209 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.351242 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.351263 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.437592 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.437673 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:03 crc kubenswrapper[4764]: E0127 07:18:03.437741 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.437798 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.437914 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:03 crc kubenswrapper[4764]: E0127 07:18:03.438088 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:03 crc kubenswrapper[4764]: E0127 07:18:03.438256 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:03 crc kubenswrapper[4764]: E0127 07:18:03.438359 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.441657 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:49:28.15657056 +0000 UTC Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.455938 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.456020 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.456047 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.456078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.456100 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.559114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.559160 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.559173 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.559191 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.559207 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.661724 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.661782 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.661801 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.661829 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.661848 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.765157 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.765207 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.765224 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.765244 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.765258 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.869420 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.869529 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.869558 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.869593 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.869619 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.973485 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.973579 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.973604 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.973638 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:03 crc kubenswrapper[4764]: I0127 07:18:03.973661 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:03Z","lastTransitionTime":"2026-01-27T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.077395 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.077462 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.077476 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.077495 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.077534 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.181045 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.181090 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.181110 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.181129 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.181140 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.283601 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.283646 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.283658 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.283677 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.283688 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.387503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.387598 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.387626 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.387669 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.387696 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.442569 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:55:18.692532316 +0000 UTC Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.491222 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.491292 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.491326 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.491360 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.491383 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.595244 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.595292 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.595305 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.595324 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.595339 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.699387 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.699513 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.699544 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.699586 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.699613 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.803051 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.803128 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.803152 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.803188 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.803215 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.907586 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.907665 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.907690 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.907720 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:04 crc kubenswrapper[4764]: I0127 07:18:04.907741 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:04Z","lastTransitionTime":"2026-01-27T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.011045 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.011118 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.011139 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.011171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.011193 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.113312 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.113368 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.113397 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.113424 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.113467 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.217278 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.217345 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.217364 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.217393 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.217412 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.320772 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.320809 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.320816 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.320834 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.320845 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.424138 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.424183 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.424193 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.424211 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.424223 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.438102 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:05 crc kubenswrapper[4764]: E0127 07:18:05.438233 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.438455 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.438491 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.438508 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:05 crc kubenswrapper[4764]: E0127 07:18:05.438766 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:05 crc kubenswrapper[4764]: E0127 07:18:05.438883 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:05 crc kubenswrapper[4764]: E0127 07:18:05.438876 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.443505 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:38:58.323833649 +0000 UTC Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.527365 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.527732 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.527828 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.527931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.528014 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.631308 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.631360 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.631373 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.631395 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.631412 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.734780 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.734835 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.734848 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.734875 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.734889 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.838121 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.838198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.838215 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.838246 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.838266 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.941612 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.941649 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.941658 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.941673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:05 crc kubenswrapper[4764]: I0127 07:18:05.941682 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:05Z","lastTransitionTime":"2026-01-27T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.045728 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.045783 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.045791 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.045813 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.045823 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.149653 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.149705 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.149721 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.149746 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.149763 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.253556 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.253633 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.253652 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.253679 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.253700 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.358341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.358484 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.358504 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.358535 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.358554 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.443712 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:08:25.337073665 +0000 UTC Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.462927 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.463010 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.463064 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.463094 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.463113 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.566581 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.566642 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.566653 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.566674 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.566686 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.669825 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.669902 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.669919 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.669945 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.669964 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.773187 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.773236 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.773257 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.773282 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.773301 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.880362 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.880493 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.880514 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.880542 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.880562 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.984198 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.984667 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.984854 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.985109 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:06 crc kubenswrapper[4764]: I0127 07:18:06.985291 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:06Z","lastTransitionTime":"2026-01-27T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.089511 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.089599 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.089616 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.089648 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.089670 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.193992 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.194041 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.194057 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.194083 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.194101 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.297647 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.297703 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.297717 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.297748 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.297764 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.401820 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.401898 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.401910 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.401927 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.401941 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.437866 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.437990 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.438137 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.438191 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.438322 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.438398 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.438565 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.438634 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.444959 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:35:31.865181819 +0000 UTC Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.505409 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.505503 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.505522 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.505554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.505572 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.514680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.514756 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.514775 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.514803 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.514828 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.536519 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.544314 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.544502 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.544543 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.544637 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.544713 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.567517 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.579967 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.580020 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.580033 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.580054 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.580066 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.604025 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.604075 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.604084 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.604104 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.604116 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.618285 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.622769 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.622842 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.622869 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.622906 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.622935 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.642062 4764 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f93d49ef-43cd-4375-924e-313a995dd43d\\\",\\\"systemUUID\\\":\\\"8df70a0f-a43f-46e8-bc96-59789f0d9a1b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.642184 4764 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.644649 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.644673 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.644683 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.644701 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.644712 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.724134 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.724378 4764 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:18:07 crc kubenswrapper[4764]: E0127 07:18:07.724510 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs podName:6a5473d6-3349-44a0-8a36-4112062a89a6 nodeName:}" failed. No retries permitted until 2026-01-27 07:19:11.724487407 +0000 UTC m=+164.320109933 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs") pod "network-metrics-daemon-crfqf" (UID: "6a5473d6-3349-44a0-8a36-4112062a89a6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.747675 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.747727 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.747742 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.747764 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.747779 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.852842 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.852914 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.852973 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.853007 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.853028 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.955894 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.955948 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.955958 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.955977 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:07 crc kubenswrapper[4764]: I0127 07:18:07.956013 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:07Z","lastTransitionTime":"2026-01-27T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.058234 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.058298 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.058316 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.058341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.058361 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.162240 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.162302 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.162318 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.162357 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.162374 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.265587 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.265651 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.265672 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.265701 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.265722 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.369110 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.369179 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.369200 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.369263 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.369283 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.445527 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:49:58.828456848 +0000 UTC Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.472822 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.472903 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.472931 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.472965 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.472993 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.473959 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91863a32-a5e4-42d3-9d33-d672d2f1300d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:47Z\\\",\\\"message\\\":\\\"p\\\\nI0127 07:17:47.652915 6819 factory.go:656] Stopping watch factory\\\\nI0127 07:17:47.652928 6819 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 07:17:47.652835 6819 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652937 6819 ovnkube.go:599] Stopped 
ovnkube\\\\nI0127 07:17:47.652938 6819 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652950 6819 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-lh5rf in node crc\\\\nI0127 07:17:47.652955 6819 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-lh5rf after 0 failed attempt(s)\\\\nI0127 07:17:47.652959 6819 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-lh5rf\\\\nI0127 07:17:47.652976 6819 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 216.486µs)\\\\nI0127 07:17:47.653008 6819 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:17:47.653031 6819 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 07:17:47.653131 6819 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:17:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ead9b4bfc85ae42eb
30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf6fw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gwmsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.494802 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ca2e719f0d42bfc473f53b92aa7f90ab3cf41a572f27e38ddbfee8dee9eeff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.520425 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.544831 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.569848 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8190cdf24d33b8202ca02dbec9df02e05d56bbbea475894fdf5abeadf38caff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c03bafcddc7090d999f75d685a00751d71eae71d3edcf1e39b0ad89daf9ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.576247 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.576777 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.576979 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.577282 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.577581 4764 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.585432 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54fa6cae-5eb5-4360-9bcc-5e2e533b140d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca29063a1e55214845475f4b10f0e125b1576439af78e06f5945b117d6018a2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://934a2955551f7654e7977621bd6d07d4d7c34c667ee64c833eddd1783efec495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.612840 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"513625cf-9542-409f-a0ad-e6b011f14b38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db8a05b1bad2810fd291ace5f0166b7061681ff4757d1b89368e48fd293f28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d530f1c58454f3ee757a0bc58fa821204e378d643a7bb3dd01d122feaf02497\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91d8f3d3f5fdb82fe9c5366c970b42d538d71a763eda361b412ce4349cf2144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e3975853f90b5a1f71fa29f8ea50579644ce15a2c72a0f9debdfdcf72f2387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://060deb63e32b34b747696d5c32a61c6cf252c55733c7c0f5a3922fe54982a781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c3fbb091d06c72427beaf78eb02f106f8bc8243dbcf79bed8de0efdf6f6f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75c3fbb091d06c72427beaf78eb02f106f8bc8243dbcf79bed8de0efdf6f6f17\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19614eb55e5db76fee77bdcb30a3efca98da0974ea3a35b1741a9f147729716c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19614eb55e5db76fee77bdcb30a3efca98da0974ea3a35b1741a9f147729716c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7b3e9660c7872c9a32d6da0cd73bd6f88a459d169aff5982abb180e4e1acf1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b3e9660c7872c9a32d6da0cd73bd6f88a459d169aff5982abb180e4e1acf1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.635538 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7eb863a-3d83-4274-8fa2-1a22baf533c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f32a71d041112bc848664cf46f1f5395acd05138995533bd761d50a3ab417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9222fbac99c3cc5af94848377d3bc19f35ea9f5c4966a95fea41eb01e473f9e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792bdfe6c9fab5abeeaeda2ab55311e9a238d99038680f8c7168afc78a2e92e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.655510 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128b41b4-afeb-48fe-ba62-dc375d0363ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1d870288234ba9c729b5020e6a4079546190768272936154252b8e29d3c716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2545d6a3e22bad8bdb824be605da793d96b7a58d87bc7472ed5c4677211a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96fdfae44ff2c4bfff2278adab23bd519747d2a1dd206500a7f84b0dd0330c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ab33ba1d298cdb809d6941258fe310e234e167d983239bb7f7c7ca3cad0aee1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.674052 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75860c8-bd8b-434f-b2c6-91e7b7f60638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191
2344c1e175322b4827be374e07e7b557e769102c9bd2958758619c65f7c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.680554 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.680644 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.680658 4764 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.680677 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.680721 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.693545 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d6c16d-7028-4bfc-89ed-6a3e799f2759\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e037408713c66ebbfd058753026cc4909ea5fa78e2d7713fd3fde8
5085253c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb614c4a71aa819bfe5fcf940ab416be78a6240b65e522df806da380dde9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f4wg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clrx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.707509 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.719685 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5262052ffd308a3a2ac3b46aa4bf3efdd2cd8f6fb43e897b7e81225b95b45f90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.734000 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dvbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e936b8fc-81d9-4222-a66f-742b2db87386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:17:37Z\\\",\\\"message\\\":\\\"2026-01-27T07:16:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06\\\\n2026-01-27T07:16:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40f9ca18-2d54-48b8-9de5-85cc0fff6d06 to /host/opt/cni/bin/\\\\n2026-01-27T07:16:52Z [verbose] multus-daemon started\\\\n2026-01-27T07:16:52Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:17:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d99ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dvbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.760717 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8be2cdf-f587-4704-9020-dcb7c8ced33d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d43f48a075ecd3deec848e292f8bc7fe49f6fc0b739aab34eeeb9b016b9fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c9689ccca12b26f559086f59bc11afefa69179e514161ca8893d828891073d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa31d135b0a5c144fc4d30d562f8afa5d4f4b87f67f0f602ef91d7df11cbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005462c517357c627162ee514964535bcab1de2ad570b1d2a6a90042e687985d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08526
7abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://085267abc2e9f371da2563f77cac3b41e16b42397f14c07fcae0ef922b0e1d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ef18d0b20ff65521042beee23cf067077cad647bb740b714e80c8f4640e3c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:55Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7460d82786a39e0dc03a5248630dee6f6f64e692d3845bcb9efd048c7f9eb115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8phc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lh5rf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.774949 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:16:49.011132 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:16:49.011254 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:16:49.012257 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1952454789/tls.crt::/tmp/serving-cert-1952454789/tls.key\\\\\\\"\\\\nI0127 07:16:49.362464 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:16:49.366251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:16:49.366305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:16:49.366354 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:16:49.366381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:16:49.382352 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:16:49.382401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382416 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:16:49.382430 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0127 07:16:49.382466 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 07:16:49.382490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:16:49.382501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:16:49.382511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 07:16:49.383809 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.784785 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.784851 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.784924 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.784954 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.785042 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.789514 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4sbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2794da51-6825-4d02-8ed3-bc0ff88fb961\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1c0731c00c9b99c665217aca9f9af1872efe043dade3636f72650df7ff90fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pgrr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4sbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.804939 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a061a513-f05f-4aa7-8310-5e418f3f747d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9c4e2d2155603cc960caf54e8fb0764cd383c69ddf34531915f24c70f8ca0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d90447
91c34ae9f19de3e6e0be2c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhqqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:16:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8qgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.820412 4764 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-crfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a5473d6-3349-44a0-8a36-4112062a89a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxx6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:17:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-crfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:18:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:18:08 crc 
kubenswrapper[4764]: I0127 07:18:08.887370 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.887425 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.887475 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.887502 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.887520 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.990990 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.991034 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.991046 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.991064 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:08 crc kubenswrapper[4764]: I0127 07:18:08.991075 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:08Z","lastTransitionTime":"2026-01-27T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.094387 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.094493 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.094507 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.094528 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.094540 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.198285 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.198332 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.198342 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.198364 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.198375 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.301558 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.301632 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.301652 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.301685 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.301740 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.405135 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.405209 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.405230 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.405259 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.405282 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.437926 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.438056 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.438086 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:09 crc kubenswrapper[4764]: E0127 07:18:09.438307 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.438367 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:09 crc kubenswrapper[4764]: E0127 07:18:09.438591 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:09 crc kubenswrapper[4764]: E0127 07:18:09.438711 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:09 crc kubenswrapper[4764]: E0127 07:18:09.439023 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.446310 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:46:15.555351487 +0000 UTC Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.509513 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.509589 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.509611 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.509645 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.509670 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.613288 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.613351 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.613371 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.613399 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.613419 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.717680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.717750 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.717769 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.717798 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.717822 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.821272 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.821351 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.821368 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.821396 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.821415 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.925612 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.925678 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.925695 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.925720 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:09 crc kubenswrapper[4764]: I0127 07:18:09.925737 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:09Z","lastTransitionTime":"2026-01-27T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.029115 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.029204 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.029223 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.029253 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.029278 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.133317 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.133386 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.133405 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.133431 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.133488 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.237102 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.237179 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.237218 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.237254 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.237278 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.341355 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.341426 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.341487 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.341523 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.341543 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.444358 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.444410 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.444423 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.444458 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.444471 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.446511 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:37:15.028703779 +0000 UTC Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.547927 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.547981 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.547992 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.548012 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.548025 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.650957 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.651040 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.651063 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.651092 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.651111 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.762099 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.762811 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.763016 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.763192 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.763336 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.866378 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.866493 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.866515 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.866577 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.866598 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.970324 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.970401 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.970421 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.970502 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:10 crc kubenswrapper[4764]: I0127 07:18:10.970528 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:10Z","lastTransitionTime":"2026-01-27T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.075026 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.075098 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.075115 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.075140 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.075158 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.178624 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.178674 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.178715 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.178741 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.178760 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.281787 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.281850 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.281868 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.281901 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.281920 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.384748 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.384794 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.384806 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.384866 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.384879 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.437554 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:11 crc kubenswrapper[4764]: E0127 07:18:11.437764 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.438015 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.438085 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.438147 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:11 crc kubenswrapper[4764]: E0127 07:18:11.438272 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:11 crc kubenswrapper[4764]: E0127 07:18:11.438375 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:11 crc kubenswrapper[4764]: E0127 07:18:11.438722 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.447501 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:30:05.016596695 +0000 UTC Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.487681 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.487746 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.487767 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.487792 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.487816 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.590982 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.591066 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.591090 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.591123 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.591147 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.694826 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.695703 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.695766 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.695799 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.695824 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.797916 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.797955 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.797965 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.797981 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.797991 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.901654 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.901738 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.901767 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.901800 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:11 crc kubenswrapper[4764]: I0127 07:18:11.901823 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:11Z","lastTransitionTime":"2026-01-27T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.005516 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.005583 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.005605 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.005636 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.005660 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.109078 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.109157 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.109181 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.109217 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.109244 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.212277 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.212375 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.212396 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.212421 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.212494 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.316072 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.316141 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.316162 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.316189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.316213 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.419618 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.419656 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.419668 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.419688 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.419700 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.447879 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:45:02.051469312 +0000 UTC Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.524042 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.524129 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.524151 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.524185 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.524207 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.627412 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.627622 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.627643 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.627675 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.627693 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.731715 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.731800 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.731826 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.731857 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.731883 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.835315 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.835627 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.835663 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.835697 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.835719 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.940168 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.940243 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.940262 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.940289 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:12 crc kubenswrapper[4764]: I0127 07:18:12.940309 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:12Z","lastTransitionTime":"2026-01-27T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.043174 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.043247 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.043258 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.043299 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.043313 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.146114 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.146223 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.146243 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.146271 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.146294 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.249688 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.249771 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.249796 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.249831 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.249854 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.353234 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.353309 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.353332 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.353370 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.353395 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.437651 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.437700 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.437739 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:13 crc kubenswrapper[4764]: E0127 07:18:13.437858 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.437981 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:13 crc kubenswrapper[4764]: E0127 07:18:13.438146 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:13 crc kubenswrapper[4764]: E0127 07:18:13.438302 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:13 crc kubenswrapper[4764]: E0127 07:18:13.438414 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.448602 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:46:23.857548038 +0000 UTC Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.456406 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.456483 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.456505 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.456532 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.456550 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.559315 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.559383 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.559405 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.559431 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.559491 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.663042 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.663152 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.663171 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.663196 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.663213 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.767558 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.767633 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.767659 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.767694 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.767719 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.872014 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.872163 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.872186 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.872240 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.872266 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.975243 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.975307 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.975319 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.975338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:13 crc kubenswrapper[4764]: I0127 07:18:13.975351 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:13Z","lastTransitionTime":"2026-01-27T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.079302 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.079393 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.079414 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.079476 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.079500 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.183099 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.183170 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.183188 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.183212 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.183229 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.286482 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.286541 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.286560 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.286583 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.286600 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.390101 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.390235 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.390255 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.390279 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.390298 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.449405 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:07:45.070687163 +0000 UTC Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.493033 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.493091 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.493102 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.493179 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.493194 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.597207 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.597594 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.597823 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.597926 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.598007 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.701585 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.701680 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.701700 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.701760 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.701778 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.806147 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.806622 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.806771 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.806894 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.807020 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.910366 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.910737 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.910843 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.910938 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:14 crc kubenswrapper[4764]: I0127 07:18:14.911032 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:14Z","lastTransitionTime":"2026-01-27T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.013388 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.013461 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.013474 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.013493 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.013505 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.116068 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.116137 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.116155 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.116180 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.116198 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.219368 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.219463 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.219481 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.219508 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.219528 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.322484 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.322584 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.322601 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.322625 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.322642 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.425961 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.426058 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.426076 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.426107 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.426132 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.437831 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.437864 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.437997 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.438070 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:15 crc kubenswrapper[4764]: E0127 07:18:15.438107 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:15 crc kubenswrapper[4764]: E0127 07:18:15.438295 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:15 crc kubenswrapper[4764]: E0127 07:18:15.438585 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:15 crc kubenswrapper[4764]: E0127 07:18:15.438745 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.440009 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:18:15 crc kubenswrapper[4764]: E0127 07:18:15.440286 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gwmsf_openshift-ovn-kubernetes(91863a32-a5e4-42d3-9d33-d672d2f1300d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.450569 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:22:27.181977215 +0000 UTC Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.530368 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.530480 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.530508 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.530538 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.530563 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.634038 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.634104 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.634123 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.634149 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.634168 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.737121 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.737192 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.737213 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.737237 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.737258 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.840829 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.840910 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.840929 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.840959 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.840984 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.944769 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.944860 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.944881 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.944912 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:15 crc kubenswrapper[4764]: I0127 07:18:15.944937 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:15Z","lastTransitionTime":"2026-01-27T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.048086 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.048151 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.048163 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.048184 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.048198 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.152080 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.152338 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.152359 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.152386 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.152404 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.258707 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.258813 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.258839 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.258874 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.258908 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.361655 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.361697 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.361707 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.361722 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.361732 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.451255 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:24:56.60949384 +0000 UTC Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.464861 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.464896 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.464905 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.464919 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.464931 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.568032 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.568093 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.568115 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.568138 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.568156 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.671085 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.671163 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.671185 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.671212 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.671233 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.773354 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.773395 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.773406 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.773423 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.773457 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.878159 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.878231 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.878251 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.878275 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.878294 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.982186 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.982250 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.982269 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.982290 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:16 crc kubenswrapper[4764]: I0127 07:18:16.982303 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:16Z","lastTransitionTime":"2026-01-27T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.085258 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.085315 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.085325 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.085341 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.085353 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.188847 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.188917 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.188936 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.188964 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.188986 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.291883 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.291923 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.291934 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.291952 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.291964 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.395778 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.395814 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.395824 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.395841 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.395853 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.437668 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.437754 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.437667 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:17 crc kubenswrapper[4764]: E0127 07:18:17.437808 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.437667 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:17 crc kubenswrapper[4764]: E0127 07:18:17.437954 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:17 crc kubenswrapper[4764]: E0127 07:18:17.438106 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:17 crc kubenswrapper[4764]: E0127 07:18:17.438561 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.452121 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:13:19.180056322 +0000 UTC Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.498773 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.498807 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.498820 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.498837 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.498852 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.601380 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.601429 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.601483 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.601514 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.601538 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.704873 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.705165 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.705189 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.705238 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.705261 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.758860 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.758899 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.758909 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.758926 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.758937 4764 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:18:17Z","lastTransitionTime":"2026-01-27T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.834804 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd"] Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.835772 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.839825 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.840378 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.841562 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.842046 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.847868 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42043728-c0da-432c-8ba1-926b63a46945-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.847933 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42043728-c0da-432c-8ba1-926b63a46945-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.847995 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/42043728-c0da-432c-8ba1-926b63a46945-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.848035 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42043728-c0da-432c-8ba1-926b63a46945-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.848096 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42043728-c0da-432c-8ba1-926b63a46945-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.904220 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2dvbb" podStartSLOduration=88.904190344 podStartE2EDuration="1m28.904190344s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:17.865428453 +0000 UTC m=+110.461051019" watchObservedRunningTime="2026-01-27 07:18:17.904190344 +0000 UTC m=+110.499812880" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.926968 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xfxc7" podStartSLOduration=88.926942097 podStartE2EDuration="1m28.926942097s" podCreationTimestamp="2026-01-27 
07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:17.92631679 +0000 UTC m=+110.521939336" watchObservedRunningTime="2026-01-27 07:18:17.926942097 +0000 UTC m=+110.522564633" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.927194 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lh5rf" podStartSLOduration=88.927186694 podStartE2EDuration="1m28.927186694s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:17.905096179 +0000 UTC m=+110.500718715" watchObservedRunningTime="2026-01-27 07:18:17.927186694 +0000 UTC m=+110.522809230" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.946705 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clrx9" podStartSLOduration=87.946671797 podStartE2EDuration="1m27.946671797s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:17.946616966 +0000 UTC m=+110.542239532" watchObservedRunningTime="2026-01-27 07:18:17.946671797 +0000 UTC m=+110.542294333" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.949341 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42043728-c0da-432c-8ba1-926b63a46945-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.949395 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42043728-c0da-432c-8ba1-926b63a46945-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.949424 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42043728-c0da-432c-8ba1-926b63a46945-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.949492 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42043728-c0da-432c-8ba1-926b63a46945-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.949528 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42043728-c0da-432c-8ba1-926b63a46945-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.949915 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42043728-c0da-432c-8ba1-926b63a46945-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: 
\"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.951008 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42043728-c0da-432c-8ba1-926b63a46945-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.951073 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42043728-c0da-432c-8ba1-926b63a46945-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.968578 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42043728-c0da-432c-8ba1-926b63a46945-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:17 crc kubenswrapper[4764]: I0127 07:18:17.970248 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42043728-c0da-432c-8ba1-926b63a46945-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xtljd\" (UID: \"42043728-c0da-432c-8ba1-926b63a46945\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.026111 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" 
podStartSLOduration=89.026086852 podStartE2EDuration="1m29.026086852s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:18.011902053 +0000 UTC m=+110.607524669" watchObservedRunningTime="2026-01-27 07:18:18.026086852 +0000 UTC m=+110.621709378" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.051877 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.051851547 podStartE2EDuration="1m29.051851547s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:18.051476517 +0000 UTC m=+110.647099043" watchObservedRunningTime="2026-01-27 07:18:18.051851547 +0000 UTC m=+110.647474093" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.090486 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4sbqw" podStartSLOduration=90.090460914 podStartE2EDuration="1m30.090460914s" podCreationTimestamp="2026-01-27 07:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:18.065610034 +0000 UTC m=+110.661232560" watchObservedRunningTime="2026-01-27 07:18:18.090460914 +0000 UTC m=+110.686083450" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.125191 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.125169314 podStartE2EDuration="1m27.125169314s" podCreationTimestamp="2026-01-27 07:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
07:18:18.124475955 +0000 UTC m=+110.720098501" watchObservedRunningTime="2026-01-27 07:18:18.125169314 +0000 UTC m=+110.720791840" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.137163 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=62.137150372 podStartE2EDuration="1m2.137150372s" podCreationTimestamp="2026-01-27 07:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:18.136554186 +0000 UTC m=+110.732176712" watchObservedRunningTime="2026-01-27 07:18:18.137150372 +0000 UTC m=+110.732772898" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.156498 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.216876 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=40.216853324 podStartE2EDuration="40.216853324s" podCreationTimestamp="2026-01-27 07:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:18.215685923 +0000 UTC m=+110.811308449" watchObservedRunningTime="2026-01-27 07:18:18.216853324 +0000 UTC m=+110.812475850" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.243465 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=25.243425582 podStartE2EDuration="25.243425582s" podCreationTimestamp="2026-01-27 07:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:18.24335948 +0000 UTC m=+110.838982006" 
watchObservedRunningTime="2026-01-27 07:18:18.243425582 +0000 UTC m=+110.839048108" Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.453018 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:08:53.813850541 +0000 UTC Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.453699 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 07:18:18 crc kubenswrapper[4764]: I0127 07:18:18.467484 4764 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 07:18:19 crc kubenswrapper[4764]: I0127 07:18:19.169299 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" event={"ID":"42043728-c0da-432c-8ba1-926b63a46945","Type":"ContainerStarted","Data":"7e651014a9f67bcf60d5938eaa32ec9418a48b947567aa6780b84578f4911386"} Jan 27 07:18:19 crc kubenswrapper[4764]: I0127 07:18:19.169383 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" event={"ID":"42043728-c0da-432c-8ba1-926b63a46945","Type":"ContainerStarted","Data":"6511ee8e3ce777f7a82ce16b11bb2a4bd9f1c511b5ed0cdfa12c51662383ea2b"} Jan 27 07:18:19 crc kubenswrapper[4764]: I0127 07:18:19.437532 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:19 crc kubenswrapper[4764]: I0127 07:18:19.437577 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:19 crc kubenswrapper[4764]: I0127 07:18:19.437549 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:19 crc kubenswrapper[4764]: I0127 07:18:19.437540 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:19 crc kubenswrapper[4764]: E0127 07:18:19.437688 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:19 crc kubenswrapper[4764]: E0127 07:18:19.437811 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:19 crc kubenswrapper[4764]: E0127 07:18:19.437966 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:19 crc kubenswrapper[4764]: E0127 07:18:19.438074 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:21 crc kubenswrapper[4764]: I0127 07:18:21.438186 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:21 crc kubenswrapper[4764]: I0127 07:18:21.438291 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:21 crc kubenswrapper[4764]: E0127 07:18:21.438373 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:21 crc kubenswrapper[4764]: I0127 07:18:21.438212 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:21 crc kubenswrapper[4764]: E0127 07:18:21.438554 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:21 crc kubenswrapper[4764]: E0127 07:18:21.438780 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:21 crc kubenswrapper[4764]: I0127 07:18:21.439538 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:21 crc kubenswrapper[4764]: E0127 07:18:21.445552 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:23 crc kubenswrapper[4764]: I0127 07:18:23.438324 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:23 crc kubenswrapper[4764]: I0127 07:18:23.438404 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:23 crc kubenswrapper[4764]: I0127 07:18:23.438360 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:23 crc kubenswrapper[4764]: E0127 07:18:23.438576 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:23 crc kubenswrapper[4764]: I0127 07:18:23.438493 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:23 crc kubenswrapper[4764]: E0127 07:18:23.438687 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:23 crc kubenswrapper[4764]: E0127 07:18:23.438784 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:23 crc kubenswrapper[4764]: E0127 07:18:23.439014 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:24 crc kubenswrapper[4764]: I0127 07:18:24.194884 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/1.log" Jan 27 07:18:24 crc kubenswrapper[4764]: I0127 07:18:24.196323 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/0.log" Jan 27 07:18:24 crc kubenswrapper[4764]: I0127 07:18:24.196415 4764 generic.go:334] "Generic (PLEG): container finished" podID="e936b8fc-81d9-4222-a66f-742b2db87386" containerID="edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096" exitCode=1 Jan 27 07:18:24 crc kubenswrapper[4764]: I0127 07:18:24.196562 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerDied","Data":"edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096"} Jan 27 07:18:24 crc kubenswrapper[4764]: I0127 07:18:24.196626 4764 scope.go:117] "RemoveContainer" containerID="1a9a033c6297c579c8a3b8c207efd6e838ab3a0166f2013c2a0a7b383c9c9999" Jan 27 07:18:24 crc kubenswrapper[4764]: I0127 07:18:24.197245 4764 scope.go:117] "RemoveContainer" containerID="edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096" Jan 27 07:18:24 crc 
kubenswrapper[4764]: E0127 07:18:24.197523 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2dvbb_openshift-multus(e936b8fc-81d9-4222-a66f-742b2db87386)\"" pod="openshift-multus/multus-2dvbb" podUID="e936b8fc-81d9-4222-a66f-742b2db87386" Jan 27 07:18:24 crc kubenswrapper[4764]: I0127 07:18:24.252488 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xtljd" podStartSLOduration=95.252461249 podStartE2EDuration="1m35.252461249s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:19.186842011 +0000 UTC m=+111.782464547" watchObservedRunningTime="2026-01-27 07:18:24.252461249 +0000 UTC m=+116.848083795" Jan 27 07:18:25 crc kubenswrapper[4764]: I0127 07:18:25.204156 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/1.log" Jan 27 07:18:25 crc kubenswrapper[4764]: I0127 07:18:25.437524 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:25 crc kubenswrapper[4764]: I0127 07:18:25.437579 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:25 crc kubenswrapper[4764]: I0127 07:18:25.437566 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:25 crc kubenswrapper[4764]: I0127 07:18:25.437696 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:25 crc kubenswrapper[4764]: E0127 07:18:25.437876 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:25 crc kubenswrapper[4764]: E0127 07:18:25.438089 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:25 crc kubenswrapper[4764]: E0127 07:18:25.438267 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:25 crc kubenswrapper[4764]: E0127 07:18:25.438367 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:27 crc kubenswrapper[4764]: I0127 07:18:27.437566 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:27 crc kubenswrapper[4764]: I0127 07:18:27.437672 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:27 crc kubenswrapper[4764]: E0127 07:18:27.437775 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:27 crc kubenswrapper[4764]: I0127 07:18:27.437565 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:27 crc kubenswrapper[4764]: I0127 07:18:27.437596 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:27 crc kubenswrapper[4764]: E0127 07:18:27.437926 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:27 crc kubenswrapper[4764]: E0127 07:18:27.438058 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:27 crc kubenswrapper[4764]: E0127 07:18:27.438156 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:28 crc kubenswrapper[4764]: E0127 07:18:28.387583 4764 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 07:18:28 crc kubenswrapper[4764]: I0127 07:18:28.440083 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:18:28 crc kubenswrapper[4764]: E0127 07:18:28.544014 4764 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.222391 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/3.log" Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.225673 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerStarted","Data":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.226181 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.413321 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podStartSLOduration=100.413263211 podStartE2EDuration="1m40.413263211s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:29.26017276 +0000 UTC m=+121.855795326" watchObservedRunningTime="2026-01-27 07:18:29.413263211 +0000 UTC m=+122.008885747" Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.415362 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-crfqf"] Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.415659 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:29 crc kubenswrapper[4764]: E0127 07:18:29.415879 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.438072 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:29 crc kubenswrapper[4764]: E0127 07:18:29.438316 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.438673 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:29 crc kubenswrapper[4764]: I0127 07:18:29.438676 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:29 crc kubenswrapper[4764]: E0127 07:18:29.438964 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:29 crc kubenswrapper[4764]: E0127 07:18:29.438799 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:31 crc kubenswrapper[4764]: I0127 07:18:31.438172 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:31 crc kubenswrapper[4764]: I0127 07:18:31.438288 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:31 crc kubenswrapper[4764]: I0127 07:18:31.438299 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:31 crc kubenswrapper[4764]: I0127 07:18:31.438613 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:31 crc kubenswrapper[4764]: E0127 07:18:31.438551 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:31 crc kubenswrapper[4764]: E0127 07:18:31.438774 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:31 crc kubenswrapper[4764]: E0127 07:18:31.438915 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:31 crc kubenswrapper[4764]: E0127 07:18:31.439163 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:33 crc kubenswrapper[4764]: I0127 07:18:33.437985 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:33 crc kubenswrapper[4764]: I0127 07:18:33.438091 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:33 crc kubenswrapper[4764]: I0127 07:18:33.437985 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:33 crc kubenswrapper[4764]: I0127 07:18:33.438101 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:33 crc kubenswrapper[4764]: E0127 07:18:33.438225 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:33 crc kubenswrapper[4764]: E0127 07:18:33.438485 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:33 crc kubenswrapper[4764]: E0127 07:18:33.438614 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:33 crc kubenswrapper[4764]: E0127 07:18:33.438789 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:33 crc kubenswrapper[4764]: E0127 07:18:33.547198 4764 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 07:18:35 crc kubenswrapper[4764]: I0127 07:18:35.437940 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:35 crc kubenswrapper[4764]: I0127 07:18:35.438032 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:35 crc kubenswrapper[4764]: I0127 07:18:35.438211 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:35 crc kubenswrapper[4764]: I0127 07:18:35.438345 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:35 crc kubenswrapper[4764]: I0127 07:18:35.438664 4764 scope.go:117] "RemoveContainer" containerID="edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096" Jan 27 07:18:35 crc kubenswrapper[4764]: E0127 07:18:35.438899 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:35 crc kubenswrapper[4764]: E0127 07:18:35.438985 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:35 crc kubenswrapper[4764]: E0127 07:18:35.439125 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:35 crc kubenswrapper[4764]: E0127 07:18:35.439202 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:36 crc kubenswrapper[4764]: I0127 07:18:36.259101 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/1.log" Jan 27 07:18:36 crc kubenswrapper[4764]: I0127 07:18:36.259194 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerStarted","Data":"14f862a72cf29d8fbfe9000a4f79195fca75a7ac58adf7a9a30d20280697f201"} Jan 27 07:18:37 crc kubenswrapper[4764]: I0127 07:18:37.438127 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:37 crc kubenswrapper[4764]: I0127 07:18:37.438191 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:37 crc kubenswrapper[4764]: I0127 07:18:37.438290 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:37 crc kubenswrapper[4764]: E0127 07:18:37.438384 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:18:37 crc kubenswrapper[4764]: I0127 07:18:37.438432 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:37 crc kubenswrapper[4764]: E0127 07:18:37.438643 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:18:37 crc kubenswrapper[4764]: E0127 07:18:37.438800 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-crfqf" podUID="6a5473d6-3349-44a0-8a36-4112062a89a6" Jan 27 07:18:37 crc kubenswrapper[4764]: E0127 07:18:37.438893 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:18:38 crc kubenswrapper[4764]: I0127 07:18:38.314225 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.438141 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.438141 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.438302 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.438359 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.442564 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.442834 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.442981 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.443072 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.442848 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 07:18:39 crc kubenswrapper[4764]: I0127 07:18:39.445791 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.783580 4764 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.835083 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-drm8b"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.836297 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.840593 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nttjc"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.840771 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.841007 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.841130 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.841672 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.841782 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.841920 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.842033 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.842345 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.842929 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.843992 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.845580 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.846316 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.846429 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.846649 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.846653 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.846745 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.846887 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4n9lw"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.847765 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.850240 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.851409 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.853063 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.853200 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.856490 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.856820 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.867807 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.868073 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.873137 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.873551 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.874015 4764 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.874043 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.877043 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.879406 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.880179 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.880696 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.881728 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.889118 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.890850 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.891729 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.892513 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 
07:18:48.894795 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.907775 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.908107 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.908454 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.908670 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.909633 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.909707 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.912759 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.913363 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.913830 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.913940 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.915099 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.916135 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-54w2m"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.916637 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-drm8b"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.916691 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.917088 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fdndb"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.917715 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.918470 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.918471 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.918926 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.919006 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.919053 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kpqkv"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.919373 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.919404 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.920168 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.920888 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.921214 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.921322 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.921939 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.922063 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.924517 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.924978 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2jhk"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.925280 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-989np"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.925607 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-989np" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.925884 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.926133 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.927641 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.927791 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.928073 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.928190 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.928260 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.928275 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.928340 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.928389 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.928541 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.932674 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.932800 4764 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.932817 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.932822 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.932899 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.932982 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.933021 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.933292 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.933750 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.933882 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934196 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934287 4764 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934477 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934727 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934751 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934792 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934893 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.934934 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.935287 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.935463 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.936583 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.937192 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.938192 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.938680 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.938955 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.938994 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-client-ca\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939023 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hkd\" (UniqueName: \"kubernetes.io/projected/a7073743-ec8e-48d4-a853-f1b6e10343e4-kube-api-access-n6hkd\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939047 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fdc696b-b0c9-4452-80d5-e816379cf155-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tp9hs\" (UID: \"5fdc696b-b0c9-4452-80d5-e816379cf155\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939065 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjjj5\" (UniqueName: \"kubernetes.io/projected/faf1a2aa-14d3-4870-9886-3c0c989ed0e0-kube-api-access-bjjj5\") pod \"control-plane-machine-set-operator-78cbb6b69f-nknl2\" (UID: \"faf1a2aa-14d3-4870-9886-3c0c989ed0e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939084 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939497 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939568 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-default-certificate\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939604 
4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d754c80-9bb1-4cbe-8068-edb1bba00f87-serving-cert\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939617 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939659 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzqh2\" (UniqueName: \"kubernetes.io/projected/96899fff-2f84-46c2-88ad-f627372bb70a-kube-api-access-nzqh2\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939701 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-machine-approver-tls\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939708 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939726 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939765 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-metrics-certs\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939821 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939829 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939854 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939818 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939895 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mg6d\" (UniqueName: \"kubernetes.io/projected/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-kube-api-access-4mg6d\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939913 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8995c5d6-63a9-4be5-8186-f5b46f750cd2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939938 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939955 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-client-ca\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939965 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939973 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/877af0e5-70e7-49e5-8ed8-2073cfca18d5-images\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" 
Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.939994 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk54c\" (UniqueName: \"kubernetes.io/projected/877af0e5-70e7-49e5-8ed8-2073cfca18d5-kube-api-access-mk54c\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940018 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-config\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940046 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940051 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-config\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940070 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0db570f-f1ef-4d89-a3b0-1773a1a42630-serving-cert\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940087 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-client\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940110 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940135 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940159 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940162 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/877af0e5-70e7-49e5-8ed8-2073cfca18d5-proxy-tls\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940187 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-auth-proxy-config\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940404 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvbk\" (UniqueName: \"kubernetes.io/projected/c0c0690a-4da4-49d6-9376-21d558d9df3c-kube-api-access-jxvbk\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940430 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3f4496d-64fd-4915-8041-de29bae3a018-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940475 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-256nd\" (UniqueName: \"kubernetes.io/projected/5977cde5-6561-49a6-923d-74f32d8d74a2-kube-api-access-256nd\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940523 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940530 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f4496d-64fd-4915-8041-de29bae3a018-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940159 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940632 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940684 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-config\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940687 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940714 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-audit\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" 
Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940743 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8k8m\" (UniqueName: \"kubernetes.io/projected/773a03ba-4a88-45c8-99f2-3fcc582e31a0-kube-api-access-v8k8m\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940770 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-config\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940815 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jv9\" (UniqueName: \"kubernetes.io/projected/b866a424-f51d-42cc-9ac6-0656d94083b0-kube-api-access-w8jv9\") pod \"downloads-7954f5f757-989np\" (UID: \"b866a424-f51d-42cc-9ac6-0656d94083b0\") " pod="openshift-console/downloads-7954f5f757-989np" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940895 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5977cde5-6561-49a6-923d-74f32d8d74a2-service-ca-bundle\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940905 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940925 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-config\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.940962 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.941029 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkpsc\" (UniqueName: \"kubernetes.io/projected/8995c5d6-63a9-4be5-8186-f5b46f750cd2-kube-api-access-pkpsc\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.941156 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-config\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.941297 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542909e2-f33d-43bc-802b-bfac545976d6-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.941349 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwtxm\" (UniqueName: \"kubernetes.io/projected/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-kube-api-access-lwtxm\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.941377 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8995c5d6-63a9-4be5-8186-f5b46f750cd2-trusted-ca\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.941401 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c6nn\" (UniqueName: \"kubernetes.io/projected/b049cfac-c306-472f-ace1-bbbb32baf704-kube-api-access-4c6nn\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942591 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942654 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7073743-ec8e-48d4-a853-f1b6e10343e4-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942716 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-serving-cert\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942756 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfln\" (UniqueName: \"kubernetes.io/projected/542909e2-f33d-43bc-802b-bfac545976d6-kube-api-access-jgfln\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942788 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-audit-policies\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942815 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdrn\" (UniqueName: \"kubernetes.io/projected/5fdc696b-b0c9-4452-80d5-e816379cf155-kube-api-access-wcdrn\") pod \"cluster-samples-operator-665b6dd947-tp9hs\" (UID: \"5fdc696b-b0c9-4452-80d5-e816379cf155\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" Jan 27 07:18:48 crc 
kubenswrapper[4764]: I0127 07:18:48.942841 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942868 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942900 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-encryption-config\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942925 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqxc7\" (UniqueName: \"kubernetes.io/projected/e0db570f-f1ef-4d89-a3b0-1773a1a42630-kube-api-access-xqxc7\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942948 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-serving-cert\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942972 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8995c5d6-63a9-4be5-8186-f5b46f750cd2-metrics-tls\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.942997 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/faf1a2aa-14d3-4870-9886-3c0c989ed0e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nknl2\" (UID: \"faf1a2aa-14d3-4870-9886-3c0c989ed0e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943025 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a03ba-4a88-45c8-99f2-3fcc582e31a0-config\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943051 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943073 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943160 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdf25\" (UniqueName: \"kubernetes.io/projected/2d754c80-9bb1-4cbe-8068-edb1bba00f87-kube-api-access-vdf25\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943208 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-stats-auth\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943231 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943300 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-etcd-client\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc 
kubenswrapper[4764]: I0127 07:18:48.943350 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943379 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-ca\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943404 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c0c0690a-4da4-49d6-9376-21d558d9df3c-node-pullsecrets\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943427 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-etcd-client\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943530 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/773a03ba-4a88-45c8-99f2-3fcc582e31a0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: 
\"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943564 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-image-import-ca\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943588 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0c0690a-4da4-49d6-9376-21d558d9df3c-audit-dir\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943621 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96899fff-2f84-46c2-88ad-f627372bb70a-serving-cert\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943645 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-audit-dir\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943679 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-service-ca-bundle\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943724 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943772 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/773a03ba-4a88-45c8-99f2-3fcc582e31a0-images\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943842 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-etcd-serving-ca\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943887 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b049cfac-c306-472f-ace1-bbbb32baf704-audit-dir\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 
27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943935 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542909e2-f33d-43bc-802b-bfac545976d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943960 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-service-ca\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.943984 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-encryption-config\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.944029 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f4496d-64fd-4915-8041-de29bae3a018-config\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.944081 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-audit-policies\") 
pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.944111 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/877af0e5-70e7-49e5-8ed8-2073cfca18d5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.944161 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.948266 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-trz85"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.948491 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.949372 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dk6gm"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.950040 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.950214 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 
07:18:48.950372 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.951142 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.962590 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.966346 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.968027 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.970333 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.975455 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.979827 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.986261 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.986719 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.987676 4764 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.988517 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.988894 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.989262 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.989645 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.989812 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.993579 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.994418 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.994678 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.997835 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.997997 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8"] Jan 27 07:18:48 crc kubenswrapper[4764]: I0127 07:18:48.999242 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d45gk"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.000633 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.001623 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.001864 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.001908 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.001940 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.002284 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.002428 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.003496 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwsfp"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.004222 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.006724 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4xkk6"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.010916 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bdfcp"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.011747 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.011147 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.011902 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.015098 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.015652 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.016044 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-258fq"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.016534 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.016621 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-258fq" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.017563 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmmqc"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.018004 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.018276 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.019523 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.019909 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.019990 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.023644 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.024356 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.024624 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.025397 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.025776 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.026278 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.026303 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.026341 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.027417 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-znvps"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.041417 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.041497 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fdndb"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.041657 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.048700 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.048754 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.050881 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f4496d-64fd-4915-8041-de29bae3a018-config\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.051019 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/877af0e5-70e7-49e5-8ed8-2073cfca18d5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.051139 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d61e39-152a-497b-93ca-dec64d1a9849-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: \"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.052686 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.052795 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjjj5\" (UniqueName: \"kubernetes.io/projected/faf1a2aa-14d3-4870-9886-3c0c989ed0e0-kube-api-access-bjjj5\") pod \"control-plane-machine-set-operator-78cbb6b69f-nknl2\" (UID: \"faf1a2aa-14d3-4870-9886-3c0c989ed0e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.052895 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbbc\" (UniqueName: \"kubernetes.io/projected/9ab9c597-592a-43bf-a3a1-f24bfabaab39-kube-api-access-vbbbc\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.052983 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-machine-approver-tls\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053067 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053145 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a8518827-9b43-4e92-8816-5e0af41bbfee-tmpfs\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053226 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-profile-collector-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053304 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvvt\" (UniqueName: \"kubernetes.io/projected/39f8297e-b534-44ff-9b38-4eb269960b80-kube-api-access-xtvvt\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053403 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk54c\" (UniqueName: \"kubernetes.io/projected/877af0e5-70e7-49e5-8ed8-2073cfca18d5-kube-api-access-mk54c\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053518 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-config\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053603 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-signing-key\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053707 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-client\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053795 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053873 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/877af0e5-70e7-49e5-8ed8-2073cfca18d5-proxy-tls\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.053960 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d61e39-152a-497b-93ca-dec64d1a9849-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: \"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054045 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-auth-proxy-config\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054130 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvbk\" (UniqueName: \"kubernetes.io/projected/c0c0690a-4da4-49d6-9376-21d558d9df3c-kube-api-access-jxvbk\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054208 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3f4496d-64fd-4915-8041-de29bae3a018-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054288 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dc9b688-8bcc-404b-9688-cfcd405b8075-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054369 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-256nd\" (UniqueName: \"kubernetes.io/projected/5977cde5-6561-49a6-923d-74f32d8d74a2-kube-api-access-256nd\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054633 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f4496d-64fd-4915-8041-de29bae3a018-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054770 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-audit\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054879 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.054962 4764 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-v8k8m\" (UniqueName: \"kubernetes.io/projected/773a03ba-4a88-45c8-99f2-3fcc582e31a0-kube-api-access-v8k8m\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055052 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-config\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055137 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9b688-8bcc-404b-9688-cfcd405b8075-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055215 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-trusted-ca-bundle\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055307 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-config\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 
07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055394 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055498 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-apiservice-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055606 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08928b4f-dcd5-4b90-837d-ba7f80007ba0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055692 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bqlk\" (UniqueName: \"kubernetes.io/projected/b4c388c4-2071-4b3b-97b6-52aec664b967-kube-api-access-7bqlk\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055788 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-srv-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.055944 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056134 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8995c5d6-63a9-4be5-8186-f5b46f750cd2-trusted-ca\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056382 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056446 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7073743-ec8e-48d4-a853-f1b6e10343e4-serving-cert\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056480 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-certs\") pod 
\"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056620 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgfln\" (UniqueName: \"kubernetes.io/projected/542909e2-f33d-43bc-802b-bfac545976d6-kube-api-access-jgfln\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056655 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-audit-policies\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056681 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqxc7\" (UniqueName: \"kubernetes.io/projected/e0db570f-f1ef-4d89-a3b0-1773a1a42630-kube-api-access-xqxc7\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056701 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f01849b3-05bc-4e0d-b130-7c9c426e7979-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056730 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb3f297-a2e8-4567-9953-8141e93ce37a-config\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056882 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szd9g\" (UniqueName: \"kubernetes.io/projected/4eb3f297-a2e8-4567-9953-8141e93ce37a-kube-api-access-szd9g\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056919 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pt2z\" (UniqueName: \"kubernetes.io/projected/f57af491-613c-4af2-9ae6-18ba05d35ca8-kube-api-access-5pt2z\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056951 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-stats-auth\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.056974 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08928b4f-dcd5-4b90-837d-ba7f80007ba0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: 
\"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.057012 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/773a03ba-4a88-45c8-99f2-3fcc582e31a0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.057039 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-etcd-client\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.057063 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-etcd-client\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.057091 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-image-import-ca\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.057871 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0c0690a-4da4-49d6-9376-21d558d9df3c-audit-dir\") pod 
\"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.057942 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-oauth-config\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.057991 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058042 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-audit-dir\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058089 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-service-ca-bundle\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058125 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058177 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-etcd-serving-ca\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058213 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b049cfac-c306-472f-ace1-bbbb32baf704-audit-dir\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058258 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542909e2-f33d-43bc-802b-bfac545976d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058300 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-service-ca\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 
07:18:49.058330 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-encryption-config\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058367 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058402 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-audit-policies\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058465 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baeb63d9-2b68-4047-9fbd-ba4c05f872d9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4xkk6\" (UID: \"baeb63d9-2b68-4047-9fbd-ba4c05f872d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058803 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskkx\" (UniqueName: \"kubernetes.io/projected/72d61e39-152a-497b-93ca-dec64d1a9849-kube-api-access-lskkx\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: 
\"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058851 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058901 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-client-ca\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058939 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6hkd\" (UniqueName: \"kubernetes.io/projected/a7073743-ec8e-48d4-a853-f1b6e10343e4-kube-api-access-n6hkd\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.058977 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fdc696b-b0c9-4452-80d5-e816379cf155-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tp9hs\" (UID: \"5fdc696b-b0c9-4452-80d5-e816379cf155\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059035 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059073 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb6jh\" (UniqueName: \"kubernetes.io/projected/f01849b3-05bc-4e0d-b130-7c9c426e7979-kube-api-access-lb6jh\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059116 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-default-certificate\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059151 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d754c80-9bb1-4cbe-8068-edb1bba00f87-serving-cert\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059190 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwgxj\" (UniqueName: \"kubernetes.io/projected/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-kube-api-access-mwgxj\") pod 
\"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059224 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzqh2\" (UniqueName: \"kubernetes.io/projected/96899fff-2f84-46c2-88ad-f627372bb70a-kube-api-access-nzqh2\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059259 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-metrics-certs\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059292 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059329 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6jbc\" (UniqueName: \"kubernetes.io/projected/baeb63d9-2b68-4047-9fbd-ba4c05f872d9-kube-api-access-v6jbc\") pod \"multus-admission-controller-857f4d67dd-4xkk6\" (UID: \"baeb63d9-2b68-4047-9fbd-ba4c05f872d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059358 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-client-ca\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059391 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mg6d\" (UniqueName: \"kubernetes.io/projected/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-kube-api-access-4mg6d\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059424 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8995c5d6-63a9-4be5-8186-f5b46f750cd2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059474 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059516 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/877af0e5-70e7-49e5-8ed8-2073cfca18d5-images\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059553 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-config\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059606 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0db570f-f1ef-4d89-a3b0-1773a1a42630-serving-cert\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059650 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059694 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9b688-8bcc-404b-9688-cfcd405b8075-config\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059734 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08928b4f-dcd5-4b90-837d-ba7f80007ba0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059770 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-serving-cert\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059802 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059839 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-signing-cabundle\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059879 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-config\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" 
Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059918 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcs5j\" (UniqueName: \"kubernetes.io/projected/cdacf66a-e67e-4d9b-b080-40c0910efda9-kube-api-access-rcs5j\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059957 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jv9\" (UniqueName: \"kubernetes.io/projected/b866a424-f51d-42cc-9ac6-0656d94083b0-kube-api-access-w8jv9\") pod \"downloads-7954f5f757-989np\" (UID: \"b866a424-f51d-42cc-9ac6-0656d94083b0\") " pod="openshift-console/downloads-7954f5f757-989np" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.059997 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5977cde5-6561-49a6-923d-74f32d8d74a2-service-ca-bundle\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060034 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4eb3f297-a2e8-4567-9953-8141e93ce37a-trusted-ca\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060100 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkpsc\" (UniqueName: \"kubernetes.io/projected/8995c5d6-63a9-4be5-8186-f5b46f750cd2-kube-api-access-pkpsc\") pod 
\"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060140 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-config\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060177 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542909e2-f33d-43bc-802b-bfac545976d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060215 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwtxm\" (UniqueName: \"kubernetes.io/projected/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-kube-api-access-lwtxm\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060253 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnn4\" (UniqueName: \"kubernetes.io/projected/a8518827-9b43-4e92-8816-5e0af41bbfee-kube-api-access-znnn4\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060296 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4c6nn\" (UniqueName: \"kubernetes.io/projected/b049cfac-c306-472f-ace1-bbbb32baf704-kube-api-access-4c6nn\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060343 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060392 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-serving-cert\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060529 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-audit\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060808 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 
27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.060882 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.052083 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/877af0e5-70e7-49e5-8ed8-2073cfca18d5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.052177 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f4496d-64fd-4915-8041-de29bae3a018-config\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.061365 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4n9lw"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.061645 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f01849b3-05bc-4e0d-b130-7c9c426e7979-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.063728 
4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0c0690a-4da4-49d6-9376-21d558d9df3c-audit-dir\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.063858 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-audit-dir\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.065035 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b049cfac-c306-472f-ace1-bbbb32baf704-audit-dir\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.065890 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-config\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.066593 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-auth-proxy-config\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.075983 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-client-ca\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.076151 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-service-ca\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.076176 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-config\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.077009 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-image-import-ca\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.077426 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc 
kubenswrapper[4764]: I0127 07:18:49.078402 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-machine-approver-tls\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.078847 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-client-ca\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.078994 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079225 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-service-ca-bundle\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079556 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7073743-ec8e-48d4-a853-f1b6e10343e4-serving-cert\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: 
\"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079582 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.065320 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-srv-cert\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079615 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-serving-cert\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079684 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcdrn\" (UniqueName: \"kubernetes.io/projected/5fdc696b-b0c9-4452-80d5-e816379cf155-kube-api-access-wcdrn\") pod \"cluster-samples-operator-665b6dd947-tp9hs\" (UID: \"5fdc696b-b0c9-4452-80d5-e816379cf155\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079722 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079750 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079794 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-audit-policies\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079756 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-oauth-serving-cert\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079852 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a03ba-4a88-45c8-99f2-3fcc582e31a0-config\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079878 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-encryption-config\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079902 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-serving-cert\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079920 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8995c5d6-63a9-4be5-8186-f5b46f750cd2-metrics-tls\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079941 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/faf1a2aa-14d3-4870-9886-3c0c989ed0e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nknl2\" (UID: \"faf1a2aa-14d3-4870-9886-3c0c989ed0e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079963 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.079988 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-service-ca\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.080023 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdf25\" (UniqueName: \"kubernetes.io/projected/2d754c80-9bb1-4cbe-8068-edb1bba00f87-kube-api-access-vdf25\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.080054 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.080082 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-node-bootstrap-token\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.080125 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/877af0e5-70e7-49e5-8ed8-2073cfca18d5-images\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.080230 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-config\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.080275 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-config\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.082479 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a03ba-4a88-45c8-99f2-3fcc582e31a0-config\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.082605 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-etcd-client\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.082839 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-stats-auth\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.083745 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-audit-policies\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.084061 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-metrics-certs\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.084203 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.084306 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.084414 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-ca\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.084514 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c0c0690a-4da4-49d6-9376-21d558d9df3c-node-pullsecrets\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.084588 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96899fff-2f84-46c2-88ad-f627372bb70a-serving-cert\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.084673 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/773a03ba-4a88-45c8-99f2-3fcc582e31a0-images\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.088767 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjpt\" (UniqueName: \"kubernetes.io/projected/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-kube-api-access-chjpt\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.088860 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-console-config\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.088944 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-webhook-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.085315 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-config\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.085983 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/773a03ba-4a88-45c8-99f2-3fcc582e31a0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.089252 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb3f297-a2e8-4567-9953-8141e93ce37a-serving-cert\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " 
pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.089808 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-client\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.089973 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.090135 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-serving-cert\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.090155 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0c0690a-4da4-49d6-9376-21d558d9df3c-etcd-serving-ca\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.090169 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.090164 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.090505 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.090722 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.091074 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c0c0690a-4da4-49d6-9376-21d558d9df3c-node-pullsecrets\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.091289 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.091730 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fdc696b-b0c9-4452-80d5-e816379cf155-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-tp9hs\" (UID: \"5fdc696b-b0c9-4452-80d5-e816379cf155\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.092038 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0c0690a-4da4-49d6-9376-21d558d9df3c-encryption-config\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.092162 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8995c5d6-63a9-4be5-8186-f5b46f750cd2-trusted-ca\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.092947 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0db570f-f1ef-4d89-a3b0-1773a1a42630-serving-cert\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.093418 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f4496d-64fd-4915-8041-de29bae3a018-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.093776 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/faf1a2aa-14d3-4870-9886-3c0c989ed0e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nknl2\" (UID: \"faf1a2aa-14d3-4870-9886-3c0c989ed0e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.094061 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5977cde5-6561-49a6-923d-74f32d8d74a2-service-ca-bundle\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.094125 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nttjc"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.094167 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.094185 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.094272 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96899fff-2f84-46c2-88ad-f627372bb70a-config\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.094758 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-encryption-config\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: 
\"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.095089 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/877af0e5-70e7-49e5-8ed8-2073cfca18d5-proxy-tls\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.096303 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/773a03ba-4a88-45c8-99f2-3fcc582e31a0-images\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.096333 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.097067 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8995c5d6-63a9-4be5-8186-f5b46f750cd2-metrics-tls\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.097095 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d754c80-9bb1-4cbe-8068-edb1bba00f87-serving-cert\") pod 
\"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.097640 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-etcd-client\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.098163 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5977cde5-6561-49a6-923d-74f32d8d74a2-default-certificate\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.097774 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542909e2-f33d-43bc-802b-bfac545976d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.097921 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0db570f-f1ef-4d89-a3b0-1773a1a42630-etcd-ca\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.097649 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-etcd-serving-ca\") 
pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.098523 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2jhk"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.098723 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96899fff-2f84-46c2-88ad-f627372bb70a-serving-cert\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.099037 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.100424 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-989np"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.101838 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-54w2m"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.102930 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.103309 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.103473 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.103714 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.105926 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542909e2-f33d-43bc-802b-bfac545976d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.106732 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4xkk6"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.108394 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.110194 4764 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwsfp"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.111267 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.112424 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.113514 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmmqc"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.114558 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.116204 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7d5nf"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.117641 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.117667 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-trz85"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.117804 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.119240 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dk6gm"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.119517 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.119810 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d45gk"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.120819 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.122194 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.123592 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-258fq"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.124755 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.125788 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.128683 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.128735 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7d5nf"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.129048 4764 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bdfcp"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.132074 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.133389 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.136191 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.137414 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.139393 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.139530 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hkrl5"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.142141 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-plfpk"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.142545 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hkrl5"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.142652 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.142889 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.143725 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-plfpk"] Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.165127 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.179538 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.198706 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-apiservice-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.198795 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08928b4f-dcd5-4b90-837d-ba7f80007ba0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.198821 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bqlk\" (UniqueName: \"kubernetes.io/projected/b4c388c4-2071-4b3b-97b6-52aec664b967-kube-api-access-7bqlk\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.198843 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-srv-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.198867 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/624845e8-1d2d-4aad-91b3-df98d48df6de-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.198885 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndsvv\" (UniqueName: \"kubernetes.io/projected/624845e8-1d2d-4aad-91b3-df98d48df6de-kube-api-access-ndsvv\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.198983 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199016 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xn7\" (UniqueName: \"kubernetes.io/projected/591d7bc8-2161-4f33-bf8d-38d89380509f-kube-api-access-z2xn7\") pod 
\"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199039 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-certs\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199084 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f01849b3-05bc-4e0d-b130-7c9c426e7979-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199117 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb3f297-a2e8-4567-9953-8141e93ce37a-config\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199144 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szd9g\" (UniqueName: \"kubernetes.io/projected/4eb3f297-a2e8-4567-9953-8141e93ce37a-kube-api-access-szd9g\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199168 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5pt2z\" (UniqueName: \"kubernetes.io/projected/f57af491-613c-4af2-9ae6-18ba05d35ca8-kube-api-access-5pt2z\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199190 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggrb\" (UniqueName: \"kubernetes.io/projected/91240e6f-a86e-405c-8182-bc4630e53033-kube-api-access-2ggrb\") pod \"migrator-59844c95c7-wsfrz\" (UID: \"91240e6f-a86e-405c-8182-bc4630e53033\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199224 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08928b4f-dcd5-4b90-837d-ba7f80007ba0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199246 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-oauth-config\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199265 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199286 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199312 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baeb63d9-2b68-4047-9fbd-ba4c05f872d9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4xkk6\" (UID: \"baeb63d9-2b68-4047-9fbd-ba4c05f872d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199334 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lskkx\" (UniqueName: \"kubernetes.io/projected/72d61e39-152a-497b-93ca-dec64d1a9849-kube-api-access-lskkx\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: \"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199354 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82032115-b907-44a7-a15a-50a3b4a89877-config\") pod \"service-ca-operator-777779d784-d45gk\" (UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199379 4764 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lb6jh\" (UniqueName: \"kubernetes.io/projected/f01849b3-05bc-4e0d-b130-7c9c426e7979-kube-api-access-lb6jh\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199400 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvnch\" (UniqueName: \"kubernetes.io/projected/82032115-b907-44a7-a15a-50a3b4a89877-kube-api-access-qvnch\") pod \"service-ca-operator-777779d784-d45gk\" (UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199420 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwgxj\" (UniqueName: \"kubernetes.io/projected/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-kube-api-access-mwgxj\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199463 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b40efb06-b036-44e2-a2fb-0845c3cf455b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199495 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6jbc\" (UniqueName: 
\"kubernetes.io/projected/baeb63d9-2b68-4047-9fbd-ba4c05f872d9-kube-api-access-v6jbc\") pod \"multus-admission-controller-857f4d67dd-4xkk6\" (UID: \"baeb63d9-2b68-4047-9fbd-ba4c05f872d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199533 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9b688-8bcc-404b-9688-cfcd405b8075-config\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199579 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199602 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08928b4f-dcd5-4b90-837d-ba7f80007ba0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199624 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-serving-cert\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 
07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199649 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-signing-cabundle\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199671 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvs9q\" (UniqueName: \"kubernetes.io/projected/02fd817e-134c-4479-8537-2b332057d2b7-kube-api-access-bvs9q\") pod \"dns-operator-744455d44c-258fq\" (UID: \"02fd817e-134c-4479-8537-2b332057d2b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-258fq" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199696 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcs5j\" (UniqueName: \"kubernetes.io/projected/cdacf66a-e67e-4d9b-b080-40c0910efda9-kube-api-access-rcs5j\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199723 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4eb3f297-a2e8-4567-9953-8141e93ce37a-trusted-ca\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199848 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znnn4\" (UniqueName: \"kubernetes.io/projected/a8518827-9b43-4e92-8816-5e0af41bbfee-kube-api-access-znnn4\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: 
\"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.199880 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.201375 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smg8c\" (UniqueName: \"kubernetes.io/projected/bf3c10ff-5fef-4acd-a698-176b6eafac68-kube-api-access-smg8c\") pod \"package-server-manager-789f6589d5-n9bt8\" (UID: \"bf3c10ff-5fef-4acd-a698-176b6eafac68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.202801 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.203916 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-oauth-config\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.206346 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.206471 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-serving-cert\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213517 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f01849b3-05bc-4e0d-b130-7c9c426e7979-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213586 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-srv-cert\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213631 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-oauth-serving-cert\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213672 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-service-ca\") pod \"console-f9d7485db-dk6gm\" 
(UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213712 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/02fd817e-134c-4479-8537-2b332057d2b7-metrics-tls\") pod \"dns-operator-744455d44c-258fq\" (UID: \"02fd817e-134c-4479-8537-2b332057d2b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-258fq" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213743 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b40efb06-b036-44e2-a2fb-0845c3cf455b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213794 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-node-bootstrap-token\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213837 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-webhook-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213868 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjpt\" (UniqueName: 
\"kubernetes.io/projected/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-kube-api-access-chjpt\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213891 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-console-config\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213919 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb3f297-a2e8-4567-9953-8141e93ce37a-serving-cert\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213956 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.213986 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b40efb06-b036-44e2-a2fb-0845c3cf455b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 
07:18:49.214032 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d61e39-152a-497b-93ca-dec64d1a9849-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: \"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214062 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214114 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vmr8\" (UniqueName: \"kubernetes.io/projected/b40efb06-b036-44e2-a2fb-0845c3cf455b-kube-api-access-4vmr8\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214160 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbbc\" (UniqueName: \"kubernetes.io/projected/9ab9c597-592a-43bf-a3a1-f24bfabaab39-kube-api-access-vbbbc\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214207 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a8518827-9b43-4e92-8816-5e0af41bbfee-tmpfs\") pod 
\"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214243 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-profile-collector-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214278 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvvt\" (UniqueName: \"kubernetes.io/projected/39f8297e-b534-44ff-9b38-4eb269960b80-kube-api-access-xtvvt\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214309 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3c10ff-5fef-4acd-a698-176b6eafac68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n9bt8\" (UID: \"bf3c10ff-5fef-4acd-a698-176b6eafac68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214352 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-signing-key\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214422 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d61e39-152a-497b-93ca-dec64d1a9849-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: \"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214487 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dc9b688-8bcc-404b-9688-cfcd405b8075-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214533 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/624845e8-1d2d-4aad-91b3-df98d48df6de-proxy-tls\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214564 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-trusted-ca-bundle\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214595 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82032115-b907-44a7-a15a-50a3b4a89877-serving-cert\") pod \"service-ca-operator-777779d784-d45gk\" 
(UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.214632 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9b688-8bcc-404b-9688-cfcd405b8075-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.215170 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-service-ca\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.215719 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-oauth-serving-cert\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.216021 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a8518827-9b43-4e92-8816-5e0af41bbfee-tmpfs\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.216560 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-console-config\") pod 
\"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.218069 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-trusted-ca-bundle\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.219803 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.229045 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb3f297-a2e8-4567-9953-8141e93ce37a-serving-cert\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.239869 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.259615 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.270396 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb3f297-a2e8-4567-9953-8141e93ce37a-config\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.289162 4764 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.292738 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4eb3f297-a2e8-4567-9953-8141e93ce37a-trusted-ca\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.299647 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316271 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvs9q\" (UniqueName: \"kubernetes.io/projected/02fd817e-134c-4479-8537-2b332057d2b7-kube-api-access-bvs9q\") pod \"dns-operator-744455d44c-258fq\" (UID: \"02fd817e-134c-4479-8537-2b332057d2b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-258fq" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316466 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smg8c\" (UniqueName: \"kubernetes.io/projected/bf3c10ff-5fef-4acd-a698-176b6eafac68-kube-api-access-smg8c\") pod \"package-server-manager-789f6589d5-n9bt8\" (UID: \"bf3c10ff-5fef-4acd-a698-176b6eafac68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316566 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/02fd817e-134c-4479-8537-2b332057d2b7-metrics-tls\") pod \"dns-operator-744455d44c-258fq\" (UID: \"02fd817e-134c-4479-8537-2b332057d2b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-258fq" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316596 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b40efb06-b036-44e2-a2fb-0845c3cf455b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316665 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316693 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b40efb06-b036-44e2-a2fb-0845c3cf455b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316758 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vmr8\" (UniqueName: \"kubernetes.io/projected/b40efb06-b036-44e2-a2fb-0845c3cf455b-kube-api-access-4vmr8\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316822 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3c10ff-5fef-4acd-a698-176b6eafac68-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-n9bt8\" (UID: \"bf3c10ff-5fef-4acd-a698-176b6eafac68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316933 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/624845e8-1d2d-4aad-91b3-df98d48df6de-proxy-tls\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.316974 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82032115-b907-44a7-a15a-50a3b4a89877-serving-cert\") pod \"service-ca-operator-777779d784-d45gk\" (UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317061 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/624845e8-1d2d-4aad-91b3-df98d48df6de-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317093 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndsvv\" (UniqueName: \"kubernetes.io/projected/624845e8-1d2d-4aad-91b3-df98d48df6de-kube-api-access-ndsvv\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317141 
4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xn7\" (UniqueName: \"kubernetes.io/projected/591d7bc8-2161-4f33-bf8d-38d89380509f-kube-api-access-z2xn7\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317213 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggrb\" (UniqueName: \"kubernetes.io/projected/91240e6f-a86e-405c-8182-bc4630e53033-kube-api-access-2ggrb\") pod \"migrator-59844c95c7-wsfrz\" (UID: \"91240e6f-a86e-405c-8182-bc4630e53033\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317276 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317323 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82032115-b907-44a7-a15a-50a3b4a89877-config\") pod \"service-ca-operator-777779d784-d45gk\" (UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317371 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnch\" (UniqueName: \"kubernetes.io/projected/82032115-b907-44a7-a15a-50a3b4a89877-kube-api-access-qvnch\") pod \"service-ca-operator-777779d784-d45gk\" (UID: 
\"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.317405 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b40efb06-b036-44e2-a2fb-0845c3cf455b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.318433 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/624845e8-1d2d-4aad-91b3-df98d48df6de-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.328464 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.338119 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b40efb06-b036-44e2-a2fb-0845c3cf455b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.339767 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.359585 4764 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.380274 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.390939 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b40efb06-b036-44e2-a2fb-0845c3cf455b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.399275 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.419400 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.432370 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f01849b3-05bc-4e0d-b130-7c9c426e7979-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.439741 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.446093 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f01849b3-05bc-4e0d-b130-7c9c426e7979-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.459834 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.499503 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.520186 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.531100 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d61e39-152a-497b-93ca-dec64d1a9849-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: \"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.541136 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.546737 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d61e39-152a-497b-93ca-dec64d1a9849-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: \"72d61e39-152a-497b-93ca-dec64d1a9849\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.560482 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.572669 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/624845e8-1d2d-4aad-91b3-df98d48df6de-proxy-tls\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.580081 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.599242 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.611979 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82032115-b907-44a7-a15a-50a3b4a89877-serving-cert\") pod \"service-ca-operator-777779d784-d45gk\" (UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.620206 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.640008 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.660616 4764 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.664953 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.680579 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.712430 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.719642 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.739414 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.748862 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82032115-b907-44a7-a15a-50a3b4a89877-config\") pod \"service-ca-operator-777779d784-d45gk\" (UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.759856 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.778873 4764 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.798596 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.820181 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.840367 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.859341 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.880727 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.899954 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.919728 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.939110 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.960167 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.964207 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-signing-cabundle\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:49 crc kubenswrapper[4764]: I0127 07:18:49.979665 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.000302 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.017379 4764 request.go:700] Waited for 1.005119517s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&limit=500&resourceVersion=0 Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.020188 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.040192 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.056105 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baeb63d9-2b68-4047-9fbd-ba4c05f872d9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4xkk6\" (UID: \"baeb63d9-2b68-4047-9fbd-ba4c05f872d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.059183 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.070064 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-signing-key\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.082613 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.100115 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.110672 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08928b4f-dcd5-4b90-837d-ba7f80007ba0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.119991 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.139062 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.152233 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-srv-cert\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.159770 4764 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.179346 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.198953 4764 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.198993 4764 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199073 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-srv-cert podName:cdacf66a-e67e-4d9b-b080-40c0910efda9 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.699045427 +0000 UTC m=+143.294667963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-srv-cert") pod "catalog-operator-68c6474976-d8w4l" (UID: "cdacf66a-e67e-4d9b-b080-40c0910efda9") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199094 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-apiservice-cert podName:a8518827-9b43-4e92-8816-5e0af41bbfee nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.699085588 +0000 UTC m=+143.294708124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-apiservice-cert") pod "packageserver-d55dfcdfc-6p9wc" (UID: "a8518827-9b43-4e92-8816-5e0af41bbfee") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199145 4764 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199219 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume podName:b4c388c4-2071-4b3b-97b6-52aec664b967 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.699190811 +0000 UTC m=+143.294813387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume") pod "collect-profiles-29491635-pg522" (UID: "b4c388c4-2071-4b3b-97b6-52aec664b967") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199291 4764 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199389 4764 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199433 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-certs podName:f57af491-613c-4af2-9ae6-18ba05d35ca8 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.699391356 +0000 UTC m=+143.295013922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-certs") pod "machine-config-server-znvps" (UID: "f57af491-613c-4af2-9ae6-18ba05d35ca8") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.199509 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199521 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08928b4f-dcd5-4b90-837d-ba7f80007ba0-serving-cert podName:08928b4f-dcd5-4b90-837d-ba7f80007ba0 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.699499859 +0000 UTC m=+143.295122425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/08928b4f-dcd5-4b90-837d-ba7f80007ba0-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" (UID: "08928b4f-dcd5-4b90-837d-ba7f80007ba0") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199674 4764 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.199770 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-profile-collector-cert podName:9ab9c597-592a-43bf-a3a1-f24bfabaab39 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.699738126 +0000 UTC m=+143.295360692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-profile-collector-cert") pod "olm-operator-6b444d44fb-2f44n" (UID: "9ab9c597-592a-43bf-a3a1-f24bfabaab39") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.202997 4764 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.203078 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dc9b688-8bcc-404b-9688-cfcd405b8075-config podName:9dc9b688-8bcc-404b-9688-cfcd405b8075 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.703056217 +0000 UTC m=+143.298678793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9dc9b688-8bcc-404b-9688-cfcd405b8075-config") pod "kube-controller-manager-operator-78b949d7b-m8bbk" (UID: "9dc9b688-8bcc-404b-9688-cfcd405b8075") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.214111 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/02fd817e-134c-4479-8537-2b332057d2b7-metrics-tls\") pod \"dns-operator-744455d44c-258fq\" (UID: \"02fd817e-134c-4479-8537-2b332057d2b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-258fq"
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.215781 4764 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.215892 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-profile-collector-cert podName:cdacf66a-e67e-4d9b-b080-40c0910efda9 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.715859437 +0000 UTC m=+143.311482003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-profile-collector-cert") pod "catalog-operator-68c6474976-d8w4l" (UID: "cdacf66a-e67e-4d9b-b080-40c0910efda9") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.217775 4764 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.217832 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-node-bootstrap-token podName:f57af491-613c-4af2-9ae6-18ba05d35ca8 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.717819771 +0000 UTC m=+143.313442307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-node-bootstrap-token") pod "machine-config-server-znvps" (UID: "f57af491-613c-4af2-9ae6-18ba05d35ca8") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.217858 4764 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.217888 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume podName:b4c388c4-2071-4b3b-97b6-52aec664b967 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.717878623 +0000 UTC m=+143.313501159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume") pod "collect-profiles-29491635-pg522" (UID: "b4c388c4-2071-4b3b-97b6-52aec664b967") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.217871 4764 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.217954 4764 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.217982 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dc9b688-8bcc-404b-9688-cfcd405b8075-serving-cert podName:9dc9b688-8bcc-404b-9688-cfcd405b8075 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.717974865 +0000 UTC m=+143.313597401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9dc9b688-8bcc-404b-9688-cfcd405b8075-serving-cert") pod "kube-controller-manager-operator-78b949d7b-m8bbk" (UID: "9dc9b688-8bcc-404b-9688-cfcd405b8075") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.218013 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-webhook-cert podName:a8518827-9b43-4e92-8816-5e0af41bbfee nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.717988646 +0000 UTC m=+143.313611212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-webhook-cert") pod "packageserver-d55dfcdfc-6p9wc" (UID: "a8518827-9b43-4e92-8816-5e0af41bbfee") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.220671 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.239918 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.259863 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.279683 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.299017 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.317502 4764 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.317566 4764 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.317641 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics podName:591d7bc8-2161-4f33-bf8d-38d89380509f nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.817596543 +0000 UTC m=+143.413219069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics") pod "marketplace-operator-79b997595-hmmqc" (UID: "591d7bc8-2161-4f33-bf8d-38d89380509f") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.317652 4764 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.317662 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf3c10ff-5fef-4acd-a698-176b6eafac68-package-server-manager-serving-cert podName:bf3c10ff-5fef-4acd-a698-176b6eafac68 nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.817653835 +0000 UTC m=+143.413276371 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/bf3c10ff-5fef-4acd-a698-176b6eafac68-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-n9bt8" (UID: "bf3c10ff-5fef-4acd-a698-176b6eafac68") : failed to sync secret cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: E0127 07:18:50.317718 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca podName:591d7bc8-2161-4f33-bf8d-38d89380509f nodeName:}" failed. No retries permitted until 2026-01-27 07:18:50.817700656 +0000 UTC m=+143.413323182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca") pod "marketplace-operator-79b997595-hmmqc" (UID: "591d7bc8-2161-4f33-bf8d-38d89380509f") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.320343 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.349282 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.360001 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.381259 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.400816 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.420101 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.440243 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.458902 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.479619 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.500120 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.518984 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.539836 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.560305 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.579753 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.599906 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.620719 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.640099 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.660494 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.703831 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjjj5\" (UniqueName: \"kubernetes.io/projected/faf1a2aa-14d3-4870-9886-3c0c989ed0e0-kube-api-access-bjjj5\") pod \"control-plane-machine-set-operator-78cbb6b69f-nknl2\" (UID: \"faf1a2aa-14d3-4870-9886-3c0c989ed0e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.744978 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9b688-8bcc-404b-9688-cfcd405b8075-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745042 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-apiservice-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745079 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-srv-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745105 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745142 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-certs\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745208 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08928b4f-dcd5-4b90-837d-ba7f80007ba0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745237 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745359 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9b688-8bcc-404b-9688-cfcd405b8075-config\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745474 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-node-bootstrap-token\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745502 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-webhook-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745533 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.745568 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-profile-collector-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.748510 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqxc7\" (UniqueName: \"kubernetes.io/projected/e0db570f-f1ef-4d89-a3b0-1773a1a42630-kube-api-access-xqxc7\") pod \"etcd-operator-b45778765-fdndb\" (UID: \"e0db570f-f1ef-4d89-a3b0-1773a1a42630\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.749591 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-webhook-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.750098 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-profile-collector-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.750311 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9b688-8bcc-404b-9688-cfcd405b8075-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.750427 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.750456 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08928b4f-dcd5-4b90-837d-ba7f80007ba0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.751081 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9b688-8bcc-404b-9688-cfcd405b8075-config\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.752295 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.755102 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab9c597-592a-43bf-a3a1-f24bfabaab39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.755510 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8518827-9b43-4e92-8816-5e0af41bbfee-apiservice-cert\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.755730 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-certs\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.755724 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cdacf66a-e67e-4d9b-b080-40c0910efda9-srv-cert\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.755958 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f57af491-613c-4af2-9ae6-18ba05d35ca8-node-bootstrap-token\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.758371 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzqh2\" (UniqueName: \"kubernetes.io/projected/96899fff-2f84-46c2-88ad-f627372bb70a-kube-api-access-nzqh2\") pod \"authentication-operator-69f744f599-54w2m\" (UID: \"96899fff-2f84-46c2-88ad-f627372bb70a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.787010 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk54c\" (UniqueName: \"kubernetes.io/projected/877af0e5-70e7-49e5-8ed8-2073cfca18d5-kube-api-access-mk54c\") pod \"machine-config-operator-74547568cd-ts96t\" (UID: \"877af0e5-70e7-49e5-8ed8-2073cfca18d5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.807692 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgfln\" (UniqueName: \"kubernetes.io/projected/542909e2-f33d-43bc-802b-bfac545976d6-kube-api-access-jgfln\") pod \"openshift-apiserver-operator-796bbdcf4f-hd5t4\" (UID: \"542909e2-f33d-43bc-802b-bfac545976d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.810064 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.827446 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8k8m\" (UniqueName: \"kubernetes.io/projected/773a03ba-4a88-45c8-99f2-3fcc582e31a0-kube-api-access-v8k8m\") pod \"machine-api-operator-5694c8668f-4n9lw\" (UID: \"773a03ba-4a88-45c8-99f2-3fcc582e31a0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.841386 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvbk\" (UniqueName: \"kubernetes.io/projected/c0c0690a-4da4-49d6-9376-21d558d9df3c-kube-api-access-jxvbk\") pod \"apiserver-76f77b778f-drm8b\" (UID: \"c0c0690a-4da4-49d6-9376-21d558d9df3c\") " pod="openshift-apiserver/apiserver-76f77b778f-drm8b"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.843187 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.847949 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.848185 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.848260 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3c10ff-5fef-4acd-a698-176b6eafac68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n9bt8\" (UID: \"bf3c10ff-5fef-4acd-a698-176b6eafac68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.850068 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.852337 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.853253 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3c10ff-5fef-4acd-a698-176b6eafac68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n9bt8\" (UID: \"bf3c10ff-5fef-4acd-a698-176b6eafac68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.860254 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6hkd\" (UniqueName: \"kubernetes.io/projected/a7073743-ec8e-48d4-a853-f1b6e10343e4-kube-api-access-n6hkd\") pod \"route-controller-manager-6576b87f9c-ghtj6\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.875006 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8995c5d6-63a9-4be5-8186-f5b46f750cd2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.893179 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mg6d\" (UniqueName: \"kubernetes.io/projected/c0e06490-3f43-42d6-aa12-bf37bd56a1cd-kube-api-access-4mg6d\") pod \"apiserver-7bbb656c7d-xmc64\" (UID: \"c0e06490-3f43-42d6-aa12-bf37bd56a1cd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.920977 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3f4496d-64fd-4915-8041-de29bae3a018-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6w9jv\" (UID: \"e3f4496d-64fd-4915-8041-de29bae3a018\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.934142 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.941369 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-256nd\" (UniqueName: \"kubernetes.io/projected/5977cde5-6561-49a6-923d-74f32d8d74a2-kube-api-access-256nd\") pod \"router-default-5444994796-kpqkv\" (UID: \"5977cde5-6561-49a6-923d-74f32d8d74a2\") " pod="openshift-ingress/router-default-5444994796-kpqkv"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.947593 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.970237 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdf25\" (UniqueName: \"kubernetes.io/projected/2d754c80-9bb1-4cbe-8068-edb1bba00f87-kube-api-access-vdf25\") pod \"controller-manager-879f6c89f-nttjc\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.980207 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.981918 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcdrn\" (UniqueName: \"kubernetes.io/projected/5fdc696b-b0c9-4452-80d5-e816379cf155-kube-api-access-wcdrn\") pod \"cluster-samples-operator-665b6dd947-tp9hs\" (UID: \"5fdc696b-b0c9-4452-80d5-e816379cf155\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.986708 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-drm8b"
Jan 27 07:18:50 crc kubenswrapper[4764]: I0127 07:18:50.988130 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs"
Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.001317 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c6nn\" (UniqueName: \"kubernetes.io/projected/b049cfac-c306-472f-ace1-bbbb32baf704-kube-api-access-4c6nn\") pod \"oauth-openshift-558db77b4-b2jhk\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk"
Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.016807 4764 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.019680 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkpsc\" (UniqueName: \"kubernetes.io/projected/8995c5d6-63a9-4be5-8186-f5b46f750cd2-kube-api-access-pkpsc\") pod \"ingress-operator-5b745b69d9-6gp4n\" (UID: \"8995c5d6-63a9-4be5-8186-f5b46f750cd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.037232 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.037310 4764 request.go:700] Waited for 1.940095244s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.038749 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.042797 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.044387 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jv9\" (UniqueName: \"kubernetes.io/projected/b866a424-f51d-42cc-9ac6-0656d94083b0-kube-api-access-w8jv9\") pod \"downloads-7954f5f757-989np\" (UID: \"b866a424-f51d-42cc-9ac6-0656d94083b0\") " pod="openshift-console/downloads-7954f5f757-989np" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.059669 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwtxm\" (UniqueName: \"kubernetes.io/projected/a8ec2dea-a9fd-4661-847d-2b367f2b2ebc-kube-api-access-lwtxm\") pod \"machine-approver-56656f9798-44vzr\" (UID: \"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.059824 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.079638 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.089185 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.096520 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fdndb"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.101888 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.104048 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.119524 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.121497 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.140814 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 07:18:51 crc kubenswrapper[4764]: W0127 07:18:51.146239 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0db570f_f1ef_4d89_a3b0_1773a1a42630.slice/crio-e7e516f94af3c9e5a0cb4ecc4bbe91745ae1da2b06f8e60ea9d3b2225d913d55 WatchSource:0}: Error finding container e7e516f94af3c9e5a0cb4ecc4bbe91745ae1da2b06f8e60ea9d3b2225d913d55: Status 404 returned error can't find the container with id e7e516f94af3c9e5a0cb4ecc4bbe91745ae1da2b06f8e60ea9d3b2225d913d55 Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.160910 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.179197 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.179216 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:51 crc kubenswrapper[4764]: W0127 07:18:51.185698 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ec2dea_a9fd_4661_847d_2b367f2b2ebc.slice/crio-2494dfea3c698d13543c4b2ff79c39fba6ec6f6803c333dd67542151bdda084e WatchSource:0}: Error finding container 2494dfea3c698d13543c4b2ff79c39fba6ec6f6803c333dd67542151bdda084e: Status 404 returned error can't find the container with id 2494dfea3c698d13543c4b2ff79c39fba6ec6f6803c333dd67542151bdda084e Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.206243 4764 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.209530 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.223744 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.242639 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.260811 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.260993 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-989np" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.261610 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.270700 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.277572 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bqlk\" (UniqueName: \"kubernetes.io/projected/b4c388c4-2071-4b3b-97b6-52aec664b967-kube-api-access-7bqlk\") pod \"collect-profiles-29491635-pg522\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.306418 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szd9g\" (UniqueName: \"kubernetes.io/projected/4eb3f297-a2e8-4567-9953-8141e93ce37a-kube-api-access-szd9g\") pod \"console-operator-58897d9998-trz85\" (UID: \"4eb3f297-a2e8-4567-9953-8141e93ce37a\") " pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.314474 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.321144 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pt2z\" (UniqueName: \"kubernetes.io/projected/f57af491-613c-4af2-9ae6-18ba05d35ca8-kube-api-access-5pt2z\") pod \"machine-config-server-znvps\" (UID: \"f57af491-613c-4af2-9ae6-18ba05d35ca8\") " pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.325317 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kpqkv" event={"ID":"5977cde5-6561-49a6-923d-74f32d8d74a2","Type":"ContainerStarted","Data":"b10f09c168934b82f781a84232821507b188cc4c78439fa601707c3e460581de"} Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.328524 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" event={"ID":"e0db570f-f1ef-4d89-a3b0-1773a1a42630","Type":"ContainerStarted","Data":"e7e516f94af3c9e5a0cb4ecc4bbe91745ae1da2b06f8e60ea9d3b2225d913d55"} Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.340638 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb6jh\" (UniqueName: \"kubernetes.io/projected/f01849b3-05bc-4e0d-b130-7c9c426e7979-kube-api-access-lb6jh\") pod \"openshift-controller-manager-operator-756b6f6bc6-6w74k\" (UID: \"f01849b3-05bc-4e0d-b130-7c9c426e7979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.363598 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lskkx\" (UniqueName: \"kubernetes.io/projected/72d61e39-152a-497b-93ca-dec64d1a9849-kube-api-access-lskkx\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2vg4\" (UID: 
\"72d61e39-152a-497b-93ca-dec64d1a9849\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.383613 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwgxj\" (UniqueName: \"kubernetes.io/projected/67b6b877-b1b6-4e34-9e7d-560cddc3e4ed-kube-api-access-mwgxj\") pod \"openshift-config-operator-7777fb866f-hsrb8\" (UID: \"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.399467 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" event={"ID":"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc","Type":"ContainerStarted","Data":"2494dfea3c698d13543c4b2ff79c39fba6ec6f6803c333dd67542151bdda084e"} Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.404772 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" event={"ID":"877af0e5-70e7-49e5-8ed8-2073cfca18d5","Type":"ContainerStarted","Data":"bfb55eaeadd785fb50c9f804293fb25d89a219f437dac45b9e9a9b9de1b4156b"} Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.405973 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcs5j\" (UniqueName: \"kubernetes.io/projected/cdacf66a-e67e-4d9b-b080-40c0910efda9-kube-api-access-rcs5j\") pod \"catalog-operator-68c6474976-d8w4l\" (UID: \"cdacf66a-e67e-4d9b-b080-40c0910efda9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.416316 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" 
event={"ID":"542909e2-f33d-43bc-802b-bfac545976d6","Type":"ContainerStarted","Data":"b925ee099e7bbadf0336fd364950e88510dd5dff9e4b425a1a065c2fb25ff7a6"} Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.422602 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6jbc\" (UniqueName: \"kubernetes.io/projected/baeb63d9-2b68-4047-9fbd-ba4c05f872d9-kube-api-access-v6jbc\") pod \"multus-admission-controller-857f4d67dd-4xkk6\" (UID: \"baeb63d9-2b68-4047-9fbd-ba4c05f872d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.435142 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znnn4\" (UniqueName: \"kubernetes.io/projected/a8518827-9b43-4e92-8816-5e0af41bbfee-kube-api-access-znnn4\") pod \"packageserver-d55dfcdfc-6p9wc\" (UID: \"a8518827-9b43-4e92-8816-5e0af41bbfee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.459619 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08928b4f-dcd5-4b90-837d-ba7f80007ba0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qs5v\" (UID: \"08928b4f-dcd5-4b90-837d-ba7f80007ba0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.464082 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.474289 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.480872 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dc9b688-8bcc-404b-9688-cfcd405b8075-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m8bbk\" (UID: \"9dc9b688-8bcc-404b-9688-cfcd405b8075\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.482065 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.499120 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.501232 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvvt\" (UniqueName: \"kubernetes.io/projected/39f8297e-b534-44ff-9b38-4eb269960b80-kube-api-access-xtvvt\") pod \"console-f9d7485db-dk6gm\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.516611 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-znvps" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.523761 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbbc\" (UniqueName: \"kubernetes.io/projected/9ab9c597-592a-43bf-a3a1-f24bfabaab39-kube-api-access-vbbbc\") pod \"olm-operator-6b444d44fb-2f44n\" (UID: \"9ab9c597-592a-43bf-a3a1-f24bfabaab39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.541201 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjpt\" (UniqueName: \"kubernetes.io/projected/a13966a5-d594-49f0-9e9d-f3b2cdc2e235-kube-api-access-chjpt\") pod \"service-ca-9c57cc56f-bdfcp\" (UID: \"a13966a5-d594-49f0-9e9d-f3b2cdc2e235\") " pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.562333 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvs9q\" (UniqueName: \"kubernetes.io/projected/02fd817e-134c-4479-8537-2b332057d2b7-kube-api-access-bvs9q\") pod \"dns-operator-744455d44c-258fq\" (UID: \"02fd817e-134c-4479-8537-2b332057d2b7\") " pod="openshift-dns-operator/dns-operator-744455d44c-258fq" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.577342 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.584258 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.587694 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smg8c\" (UniqueName: \"kubernetes.io/projected/bf3c10ff-5fef-4acd-a698-176b6eafac68-kube-api-access-smg8c\") pod 
\"package-server-manager-789f6589d5-n9bt8\" (UID: \"bf3c10ff-5fef-4acd-a698-176b6eafac68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.606235 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b40efb06-b036-44e2-a2fb-0845c3cf455b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.606702 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.632123 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.639069 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.640289 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xn7\" (UniqueName: \"kubernetes.io/projected/591d7bc8-2161-4f33-bf8d-38d89380509f-kube-api-access-z2xn7\") pod \"marketplace-operator-79b997595-hmmqc\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.663675 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vmr8\" (UniqueName: \"kubernetes.io/projected/b40efb06-b036-44e2-a2fb-0845c3cf455b-kube-api-access-4vmr8\") pod \"cluster-image-registry-operator-dc59b4c8b-495p2\" (UID: \"b40efb06-b036-44e2-a2fb-0845c3cf455b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.669865 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.672839 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.704323 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-54w2m"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.708044 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.708474 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.708689 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.712299 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.720328 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-drm8b"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.724046 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.730933 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4n9lw"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.739663 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-258fq" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.740870 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndsvv\" (UniqueName: \"kubernetes.io/projected/624845e8-1d2d-4aad-91b3-df98d48df6de-kube-api-access-ndsvv\") pod \"machine-config-controller-84d6567774-kkwh8\" (UID: \"624845e8-1d2d-4aad-91b3-df98d48df6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.757891 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.759303 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggrb\" (UniqueName: \"kubernetes.io/projected/91240e6f-a86e-405c-8182-bc4630e53033-kube-api-access-2ggrb\") pod \"migrator-59844c95c7-wsfrz\" (UID: \"91240e6f-a86e-405c-8182-bc4630e53033\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.760501 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnch\" (UniqueName: \"kubernetes.io/projected/82032115-b907-44a7-a15a-50a3b4a89877-kube-api-access-qvnch\") pod \"service-ca-operator-777779d784-d45gk\" (UID: \"82032115-b907-44a7-a15a-50a3b4a89877\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.774516 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.774723 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45ab3850-93b1-42f7-9a7d-243951b7a0d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.774862 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45ab3850-93b1-42f7-9a7d-243951b7a0d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.774958 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-trusted-ca\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.775079 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6n7l\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-kube-api-access-v6n7l\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.775194 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-certificates\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.775292 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-bound-sa-token\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.775421 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-tls\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: E0127 07:18:51.776200 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.276177601 +0000 UTC m=+144.871800297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.795398 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.805272 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nttjc"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.877508 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.877946 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl76c\" (UniqueName: \"kubernetes.io/projected/2d1e2f1d-fe12-44be-9410-1cbab59b3f1d-kube-api-access-tl76c\") pod \"ingress-canary-plfpk\" (UID: \"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d\") " pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.877982 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-trusted-ca\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.878392 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-mountpoint-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.878626 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6n7l\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-kube-api-access-v6n7l\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.878737 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-certificates\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.878763 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-plugins-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.878810 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d1e2f1d-fe12-44be-9410-1cbab59b3f1d-cert\") pod \"ingress-canary-plfpk\" (UID: \"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d\") " pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.878893 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzrn4\" (UniqueName: \"kubernetes.io/projected/69222b99-625f-4824-82d0-82c181574456-kube-api-access-jzrn4\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:51 crc 
kubenswrapper[4764]: I0127 07:18:51.878963 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-bound-sa-token\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: E0127 07:18:51.879116 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.379082199 +0000 UTC m=+144.974704725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.879226 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69222b99-625f-4824-82d0-82c181574456-config-volume\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.880566 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns8sz\" (UniqueName: \"kubernetes.io/projected/c6167ceb-06dc-4253-892a-99f1c1feffb9-kube-api-access-ns8sz\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.880621 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69222b99-625f-4824-82d0-82c181574456-metrics-tls\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.880735 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-registration-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.880788 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-tls\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.880853 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-socket-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.880928 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" 
(UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.880992 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45ab3850-93b1-42f7-9a7d-243951b7a0d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.881226 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45ab3850-93b1-42f7-9a7d-243951b7a0d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.881271 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-csi-data-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.881340 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-certificates\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.881987 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-trusted-ca\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.895362 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n"] Jan 27 07:18:51 crc kubenswrapper[4764]: E0127 07:18:51.899040 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.399018115 +0000 UTC m=+144.994640641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.899600 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45ab3850-93b1-42f7-9a7d-243951b7a0d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.900676 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-989np"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.907110 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-tls\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.914778 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45ab3850-93b1-42f7-9a7d-243951b7a0d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.923563 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.929927 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6n7l\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-kube-api-access-v6n7l\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: W0127 07:18:51.930091 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf1a2aa_14d3_4870_9886_3c0c989ed0e0.slice/crio-0dacd72c916ee13f86f963f80dc932a5b486bf18b847cba5309c3339af664ae7 WatchSource:0}: Error finding container 0dacd72c916ee13f86f963f80dc932a5b486bf18b847cba5309c3339af664ae7: Status 404 returned error can't find the container with id 0dacd72c916ee13f86f963f80dc932a5b486bf18b847cba5309c3339af664ae7 Jan 27 07:18:51 crc kubenswrapper[4764]: W0127 07:18:51.932788 4764 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96899fff_2f84_46c2_88ad_f627372bb70a.slice/crio-e73216bde098a7389c64bd9b9a7c2d63f9d91813ed0f355f6c678ddf9bcc6062 WatchSource:0}: Error finding container e73216bde098a7389c64bd9b9a7c2d63f9d91813ed0f355f6c678ddf9bcc6062: Status 404 returned error can't find the container with id e73216bde098a7389c64bd9b9a7c2d63f9d91813ed0f355f6c678ddf9bcc6062 Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.940838 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l"] Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.941965 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-bound-sa-token\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.946514 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.957307 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.962286 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.982993 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:51 crc kubenswrapper[4764]: E0127 07:18:51.983205 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.4831703 +0000 UTC m=+145.078792816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.985992 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-plugins-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986036 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d1e2f1d-fe12-44be-9410-1cbab59b3f1d-cert\") pod \"ingress-canary-plfpk\" (UID: 
\"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d\") " pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986062 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzrn4\" (UniqueName: \"kubernetes.io/projected/69222b99-625f-4824-82d0-82c181574456-kube-api-access-jzrn4\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986114 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69222b99-625f-4824-82d0-82c181574456-config-volume\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986165 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns8sz\" (UniqueName: \"kubernetes.io/projected/c6167ceb-06dc-4253-892a-99f1c1feffb9-kube-api-access-ns8sz\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986183 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69222b99-625f-4824-82d0-82c181574456-metrics-tls\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986216 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-registration-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986238 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-socket-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986275 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986344 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-csi-data-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986361 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl76c\" (UniqueName: \"kubernetes.io/projected/2d1e2f1d-fe12-44be-9410-1cbab59b3f1d-kube-api-access-tl76c\") pod \"ingress-canary-plfpk\" (UID: \"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d\") " pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986371 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-plugins-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: 
\"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986459 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-mountpoint-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986457 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-registration-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.986546 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-socket-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: E0127 07:18:51.986826 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.486811559 +0000 UTC m=+145.082434085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.987008 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-csi-data-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.987075 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c6167ceb-06dc-4253-892a-99f1c1feffb9-mountpoint-dir\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.987297 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69222b99-625f-4824-82d0-82c181574456-config-volume\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:51 crc kubenswrapper[4764]: I0127 07:18:51.996826 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69222b99-625f-4824-82d0-82c181574456-metrics-tls\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.006646 
4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d1e2f1d-fe12-44be-9410-1cbab59b3f1d-cert\") pod \"ingress-canary-plfpk\" (UID: \"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d\") " pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.055295 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzrn4\" (UniqueName: \"kubernetes.io/projected/69222b99-625f-4824-82d0-82c181574456-kube-api-access-jzrn4\") pod \"dns-default-7d5nf\" (UID: \"69222b99-625f-4824-82d0-82c181574456\") " pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.064385 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl76c\" (UniqueName: \"kubernetes.io/projected/2d1e2f1d-fe12-44be-9410-1cbab59b3f1d-kube-api-access-tl76c\") pod \"ingress-canary-plfpk\" (UID: \"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d\") " pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.077264 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns8sz\" (UniqueName: \"kubernetes.io/projected/c6167ceb-06dc-4253-892a-99f1c1feffb9-kube-api-access-ns8sz\") pod \"csi-hostpathplugin-hkrl5\" (UID: \"c6167ceb-06dc-4253-892a-99f1c1feffb9\") " pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.087844 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.088307 4764 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.588255788 +0000 UTC m=+145.183878314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.129984 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.153036 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.167505 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-plfpk" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.189451 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.189857 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:52.68984132 +0000 UTC m=+145.285463846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.294393 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.295329 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.795309379 +0000 UTC m=+145.390931905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.296310 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-trz85"] Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.301730 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc"] Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.302231 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2jhk"] Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.413352 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.413980 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:52.913955108 +0000 UTC m=+145.509577634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.430996 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-989np" event={"ID":"b866a424-f51d-42cc-9ac6-0656d94083b0","Type":"ContainerStarted","Data":"3e689071dae59a6f0697e1ff36f389c85307da0a049383eb6792373a123b71e7"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.516191 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.516716 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.016690922 +0000 UTC m=+145.612313448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599005 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" event={"ID":"a7073743-ec8e-48d4-a853-f1b6e10343e4","Type":"ContainerStarted","Data":"805f7cb793f6944f8d15e5095b0692ff03d089be9d62a1ca1d0841bfd81faa41"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599063 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" event={"ID":"faf1a2aa-14d3-4870-9886-3c0c989ed0e0","Type":"ContainerStarted","Data":"0dacd72c916ee13f86f963f80dc932a5b486bf18b847cba5309c3339af664ae7"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599091 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" event={"ID":"e0db570f-f1ef-4d89-a3b0-1773a1a42630","Type":"ContainerStarted","Data":"fff00702fedb050f89e8c90912e09c60a8eda7fae19f58a51467b145a673858f"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599103 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" event={"ID":"773a03ba-4a88-45c8-99f2-3fcc582e31a0","Type":"ContainerStarted","Data":"2007c26c45b391fcca6f893a05dffb7246cc4d22992212933d850e4d12dd7ee4"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599116 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-znvps" event={"ID":"f57af491-613c-4af2-9ae6-18ba05d35ca8","Type":"ContainerStarted","Data":"312878e4383af19af7f9ed08db9b0e2594f46719fcbd0b6d408db893076b5ea2"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599126 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" event={"ID":"c0e06490-3f43-42d6-aa12-bf37bd56a1cd","Type":"ContainerStarted","Data":"65c8c1606c97baed0ea641739621f157b179b882d13ac195d88ae75a17d82597"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599137 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" event={"ID":"cdacf66a-e67e-4d9b-b080-40c0910efda9","Type":"ContainerStarted","Data":"845105b21f4cc9a729e665d84ac72624cf3bdb2e3747af08a382407a2247b8ea"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599162 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" event={"ID":"5fdc696b-b0c9-4452-80d5-e816379cf155","Type":"ContainerStarted","Data":"f063be24293921103dd923794f502cfbb454c823f2c190100253b69c78f7d8c3"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599175 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kpqkv" event={"ID":"5977cde5-6561-49a6-923d-74f32d8d74a2","Type":"ContainerStarted","Data":"7706b1e66bd7a088a09a2a3542475b45b1fd0e5e5649a880dcefe2ca77903bdc"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599185 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" event={"ID":"877af0e5-70e7-49e5-8ed8-2073cfca18d5","Type":"ContainerStarted","Data":"637b15e07e5ed2b2de29137d9df8cfe01edf12487f2c6e8d4d0be13874b5a2c0"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.599212 4764 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" event={"ID":"96899fff-2f84-46c2-88ad-f627372bb70a","Type":"ContainerStarted","Data":"e73216bde098a7389c64bd9b9a7c2d63f9d91813ed0f355f6c678ddf9bcc6062"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.625349 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.628178 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.128152694 +0000 UTC m=+145.723775210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.665967 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"] Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.695658 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk"] Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.713415 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" event={"ID":"c0c0690a-4da4-49d6-9376-21d558d9df3c","Type":"ContainerStarted","Data":"b961beb1c7ac8502ccb7381b2655da608bb1be5991fba94b6aa3a4044a279dcf"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.717157 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" event={"ID":"542909e2-f33d-43bc-802b-bfac545976d6","Type":"ContainerStarted","Data":"fc6c2d48b9ac7743de6309c260dbaf7702c9b411d9d3ce594125e73784eae77e"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.722072 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" event={"ID":"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc","Type":"ContainerStarted","Data":"2985af473432e6179edcb6bf4b6f7672ace73ddb37167759e316dc2b52874ce0"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.724757 4764 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" event={"ID":"e3f4496d-64fd-4915-8041-de29bae3a018","Type":"ContainerStarted","Data":"060bf12a4cc5ed14a79aa0787b6e44baa4d9e23bd16c2f5d2af1fadf930f5dc7"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.724815 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" event={"ID":"e3f4496d-64fd-4915-8041-de29bae3a018","Type":"ContainerStarted","Data":"97114b2ca683e3e288d9258db2ff5f5a57ef27418336090bd24bb8dedd6aff61"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.726645 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.726801 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.226770155 +0000 UTC m=+145.822392681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.727066 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.727544 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.227531026 +0000 UTC m=+145.823153552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.731401 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" event={"ID":"2d754c80-9bb1-4cbe-8068-edb1bba00f87","Type":"ContainerStarted","Data":"c37f508c28e6b8e0541dcf0de3d267524203d2304459ea970b11fa3872a021fb"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.732905 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" event={"ID":"8995c5d6-63a9-4be5-8186-f5b46f750cd2","Type":"ContainerStarted","Data":"ae72b79d011953ba09fa445d1be3927e7f7a798466e2e5bd1d2044b860d6cb5b"} Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.828062 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.830741 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.330719062 +0000 UTC m=+145.926341588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.899990 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kpqkv" podStartSLOduration=123.899961659 podStartE2EDuration="2m3.899961659s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:52.878285825 +0000 UTC m=+145.473908351" watchObservedRunningTime="2026-01-27 07:18:52.899961659 +0000 UTC m=+145.495584195" Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.940187 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:52 crc kubenswrapper[4764]: E0127 07:18:52.941144 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.441125366 +0000 UTC m=+146.036747892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:52 crc kubenswrapper[4764]: I0127 07:18:52.995968 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.034163 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.042205 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.042587 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.542565463 +0000 UTC m=+146.138187989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.067676 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dk6gm"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.092157 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmmqc"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.125004 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4xkk6"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.143943 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.144347 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.6443286 +0000 UTC m=+146.239951116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.182204 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.244950 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.245313 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.745293775 +0000 UTC m=+146.340916301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: W0127 07:18:53.319576 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d61e39_152a_497b_93ca_dec64d1a9849.slice/crio-c334e6309cafb1abdcfdbd0d7da44363bf54a4876ecf24331c99c2f604a18384 WatchSource:0}: Error finding container c334e6309cafb1abdcfdbd0d7da44363bf54a4876ecf24331c99c2f604a18384: Status 404 returned error can't find the container with id c334e6309cafb1abdcfdbd0d7da44363bf54a4876ecf24331c99c2f604a18384 Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.331989 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:18:53 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:18:53 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:18:53 crc kubenswrapper[4764]: healthz check failed Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.332447 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.346784 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.347211 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.847196906 +0000 UTC m=+146.442819432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: W0127 07:18:53.359218 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf01849b3_05bc_4e0d_b130_7c9c426e7979.slice/crio-2b2242c454d8a24e9313c944eba213930adcd39ccaf7a2fc5ca336a4a315932c WatchSource:0}: Error finding container 2b2242c454d8a24e9313c944eba213930adcd39ccaf7a2fc5ca336a4a315932c: Status 404 returned error can't find the container with id 2b2242c454d8a24e9313c944eba213930adcd39ccaf7a2fc5ca336a4a315932c Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.360657 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fdndb" podStartSLOduration=124.360638914 podStartE2EDuration="2m4.360638914s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:53.359653357 +0000 UTC m=+145.955275883" watchObservedRunningTime="2026-01-27 07:18:53.360638914 +0000 UTC m=+145.956261440" Jan 27 07:18:53 crc kubenswrapper[4764]: W0127 07:18:53.419695 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod591d7bc8_2161_4f33_bf8d_38d89380509f.slice/crio-170bd0ef0c85f53767cc066b98d4b7bd9086ba8a26c3b4915d6461e5bd63197a WatchSource:0}: Error finding container 170bd0ef0c85f53767cc066b98d4b7bd9086ba8a26c3b4915d6461e5bd63197a: Status 404 returned error can't find the container with id 170bd0ef0c85f53767cc066b98d4b7bd9086ba8a26c3b4915d6461e5bd63197a Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.456136 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.456468 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.956408067 +0000 UTC m=+146.552030593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.456513 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.457114 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:53.957093116 +0000 UTC m=+146.552715642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.558053 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.558269 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.058225806 +0000 UTC m=+146.653848332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.558497 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.558906 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.058890094 +0000 UTC m=+146.654512620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.567654 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hd5t4" podStartSLOduration=124.567627744 podStartE2EDuration="2m4.567627744s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:53.564419676 +0000 UTC m=+146.160042202" watchObservedRunningTime="2026-01-27 07:18:53.567627744 +0000 UTC m=+146.163250270" Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.569394 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.591009 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6w9jv" podStartSLOduration=124.590983773 podStartE2EDuration="2m4.590983773s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:53.590468349 +0000 UTC m=+146.186090875" watchObservedRunningTime="2026-01-27 07:18:53.590983773 +0000 UTC m=+146.186606299" Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.662003 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.664657 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.16462266 +0000 UTC m=+146.760245346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.719205 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bdfcp"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.744404 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.762644 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-258fq"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.764097 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.764165 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.765508 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.765889 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.265877463 +0000 UTC m=+146.861499989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.795083 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8"] Jan 27 07:18:53 crc kubenswrapper[4764]: W0127 07:18:53.849571 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda13966a5_d594_49f0_9e9d_f3b2cdc2e235.slice/crio-a5ebae061c192900a7f6d9a8bf2c954fb7d04951746ecf7b4270e9eae9472770 WatchSource:0}: Error finding container a5ebae061c192900a7f6d9a8bf2c954fb7d04951746ecf7b4270e9eae9472770: Status 404 returned error can't find the container with id a5ebae061c192900a7f6d9a8bf2c954fb7d04951746ecf7b4270e9eae9472770 Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.866803 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.867477 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.367433365 +0000 UTC m=+146.963055891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.869285 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.898065 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.948535 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.949768 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" event={"ID":"72d61e39-152a-497b-93ca-dec64d1a9849","Type":"ContainerStarted","Data":"c334e6309cafb1abdcfdbd0d7da44363bf54a4876ecf24331c99c2f604a18384"} Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.968765 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:53 crc kubenswrapper[4764]: E0127 07:18:53.969197 4764 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.469180481 +0000 UTC m=+147.064803007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.976979 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-znvps" event={"ID":"f57af491-613c-4af2-9ae6-18ba05d35ca8","Type":"ContainerStarted","Data":"77ee7eddfc5f3d935a6875126f8c5f35209724527d192ae3d2f0fbb9e05c207d"} Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.978573 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d45gk"] Jan 27 07:18:53 crc kubenswrapper[4764]: I0127 07:18:53.979900 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8"] Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.001428 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-plfpk"] Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.015811 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hkrl5"] Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.016711 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" 
event={"ID":"b049cfac-c306-472f-ace1-bbbb32baf704","Type":"ContainerStarted","Data":"20141ffeb3a7e73cdb5b48ab2193faae3943161cd0b6d81971a7f52fa923e8c5"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.025495 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-znvps" podStartSLOduration=6.025469563 podStartE2EDuration="6.025469563s" podCreationTimestamp="2026-01-27 07:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.024114276 +0000 UTC m=+146.619736792" watchObservedRunningTime="2026-01-27 07:18:54.025469563 +0000 UTC m=+146.621092089" Jan 27 07:18:54 crc kubenswrapper[4764]: W0127 07:18:54.033893 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b6b877_b1b6_4e34_9e7d_560cddc3e4ed.slice/crio-45cc80865b4a56523d4ca90d47dfffdf9272b77fc8deaca6dcb73e76b026d057 WatchSource:0}: Error finding container 45cc80865b4a56523d4ca90d47dfffdf9272b77fc8deaca6dcb73e76b026d057: Status 404 returned error can't find the container with id 45cc80865b4a56523d4ca90d47dfffdf9272b77fc8deaca6dcb73e76b026d057 Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.036024 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" event={"ID":"a8518827-9b43-4e92-8816-5e0af41bbfee","Type":"ContainerStarted","Data":"bdecd22b1182cf728fbf731a6e8389ff2823aff1ec481a03906c39e1a522fcd3"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.053746 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" event={"ID":"2d754c80-9bb1-4cbe-8068-edb1bba00f87","Type":"ContainerStarted","Data":"92b624c7b38c8fcf45256fa12888b9f5b5f1e41862d9929a8f07a483bc6b5abe"} Jan 27 07:18:54 crc 
kubenswrapper[4764]: I0127 07:18:54.054825 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.061185 4764 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nttjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.061257 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" podUID="2d754c80-9bb1-4cbe-8068-edb1bba00f87" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.070509 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.072349 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.572331236 +0000 UTC m=+147.167953762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.080608 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7d5nf"] Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.088473 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" podStartSLOduration=125.088450408 podStartE2EDuration="2m5.088450408s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.080823149 +0000 UTC m=+146.676445665" watchObservedRunningTime="2026-01-27 07:18:54.088450408 +0000 UTC m=+146.684072924" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.094849 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" event={"ID":"8995c5d6-63a9-4be5-8186-f5b46f750cd2","Type":"ContainerStarted","Data":"1e28482b48644f245d434a21e5abae2c15d98ce9a95789fe3a1fc79af20e6907"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.106125 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" event={"ID":"96899fff-2f84-46c2-88ad-f627372bb70a","Type":"ContainerStarted","Data":"18d4fa1fe7397106f78ae61059477afc816f222684ab6204869b0db505cd785e"} Jan 27 07:18:54 crc kubenswrapper[4764]: W0127 07:18:54.113517 4764 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82032115_b907_44a7_a15a_50a3b4a89877.slice/crio-5c61f052730f92152ea7cfc8e884c4a67d73e48138361b1377a301b73e5dfd05 WatchSource:0}: Error finding container 5c61f052730f92152ea7cfc8e884c4a67d73e48138361b1377a301b73e5dfd05: Status 404 returned error can't find the container with id 5c61f052730f92152ea7cfc8e884c4a67d73e48138361b1377a301b73e5dfd05 Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.133847 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-54w2m" podStartSLOduration=125.13382632 podStartE2EDuration="2m5.13382632s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.132840873 +0000 UTC m=+146.728463409" watchObservedRunningTime="2026-01-27 07:18:54.13382632 +0000 UTC m=+146.729448846" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.138956 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" event={"ID":"591d7bc8-2161-4f33-bf8d-38d89380509f","Type":"ContainerStarted","Data":"170bd0ef0c85f53767cc066b98d4b7bd9086ba8a26c3b4915d6461e5bd63197a"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.152970 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" event={"ID":"5fdc696b-b0c9-4452-80d5-e816379cf155","Type":"ContainerStarted","Data":"15063181706c49328bfbf11fb89d01de2712ebdccca6dd5bfd1309f45b45204f"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.171946 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.174866 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.674847414 +0000 UTC m=+147.270469940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.181104 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" event={"ID":"baeb63d9-2b68-4047-9fbd-ba4c05f872d9","Type":"ContainerStarted","Data":"7416f600c19fde9a5a2f8d966e69803c350748881f6628f9c3f04b3962a84dd7"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.184595 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:18:54 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:18:54 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:18:54 crc kubenswrapper[4764]: healthz check failed Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.184643 4764 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.197390 4764 generic.go:334] "Generic (PLEG): container finished" podID="c0e06490-3f43-42d6-aa12-bf37bd56a1cd" containerID="82906f335a660043df2f29e0a39552f8ca20b80b817c8f420d8b51368699b5af" exitCode=0 Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.197699 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" event={"ID":"c0e06490-3f43-42d6-aa12-bf37bd56a1cd","Type":"ContainerDied","Data":"82906f335a660043df2f29e0a39552f8ca20b80b817c8f420d8b51368699b5af"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.229770 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dk6gm" event={"ID":"39f8297e-b534-44ff-9b38-4eb269960b80","Type":"ContainerStarted","Data":"c541c00ec476bb2c461065ee95d618c080031bba57e04fa8e9f659191a9615a4"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.252882 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-989np" event={"ID":"b866a424-f51d-42cc-9ac6-0656d94083b0","Type":"ContainerStarted","Data":"6ffea0f29ab917625d0093f221a1aa1ffb2f99eb89c51940786655ca8d81fa6b"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.254075 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-989np" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.255560 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" 
event={"ID":"a7073743-ec8e-48d4-a853-f1b6e10343e4","Type":"ContainerStarted","Data":"8fab584a70ff2b44452f248fab58eef47e52755b5dcd5be3f11b78385579558c"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.256566 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.261572 4764 patch_prober.go:28] interesting pod/downloads-7954f5f757-989np container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.261769 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-989np" podUID="b866a424-f51d-42cc-9ac6-0656d94083b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.272983 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.273878 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.773858976 +0000 UTC m=+147.369481502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.322864 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" event={"ID":"f01849b3-05bc-4e0d-b130-7c9c426e7979","Type":"ContainerStarted","Data":"2b2242c454d8a24e9313c944eba213930adcd39ccaf7a2fc5ca336a4a315932c"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.323096 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dk6gm" podStartSLOduration=125.323077094 podStartE2EDuration="2m5.323077094s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.269076375 +0000 UTC m=+146.864698911" watchObservedRunningTime="2026-01-27 07:18:54.323077094 +0000 UTC m=+146.918699620" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.323729 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-989np" podStartSLOduration=125.323715401 podStartE2EDuration="2m5.323715401s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.320954765 +0000 UTC m=+146.916577301" watchObservedRunningTime="2026-01-27 07:18:54.323715401 +0000 UTC m=+146.919337927" Jan 27 07:18:54 crc 
kubenswrapper[4764]: I0127 07:18:54.359120 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" event={"ID":"9dc9b688-8bcc-404b-9688-cfcd405b8075","Type":"ContainerStarted","Data":"8eaf4d00c87118caa8c733121fae8e4319c5bacbab078a04fdd3d13c1bc4da73"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.360621 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" podStartSLOduration=124.360598261 podStartE2EDuration="2m4.360598261s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.360329444 +0000 UTC m=+146.955951980" watchObservedRunningTime="2026-01-27 07:18:54.360598261 +0000 UTC m=+146.956220787" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.374338 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.376021 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.875998473 +0000 UTC m=+147.471620999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.395641 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" podStartSLOduration=125.39561767 podStartE2EDuration="2m5.39561767s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.394206852 +0000 UTC m=+146.989829378" watchObservedRunningTime="2026-01-27 07:18:54.39561767 +0000 UTC m=+146.991240196" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.406292 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" event={"ID":"877af0e5-70e7-49e5-8ed8-2073cfca18d5","Type":"ContainerStarted","Data":"6f808493b10c07a81db1ab5653dd122b994797790ce75185a6bcfd21759733bf"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.427777 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-trz85" event={"ID":"4eb3f297-a2e8-4567-9953-8141e93ce37a","Type":"ContainerStarted","Data":"d036e90543bcb6edb1fe0797da9156934063a691b0afe206c546b07691f315c5"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.428721 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:54 crc 
kubenswrapper[4764]: I0127 07:18:54.431948 4764 patch_prober.go:28] interesting pod/console-operator-58897d9998-trz85 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.432108 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-trz85" podUID="4eb3f297-a2e8-4567-9953-8141e93ce37a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.438547 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ts96t" podStartSLOduration=124.438518555 podStartE2EDuration="2m4.438518555s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.438497295 +0000 UTC m=+147.034119821" watchObservedRunningTime="2026-01-27 07:18:54.438518555 +0000 UTC m=+147.034141071" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.452863 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" event={"ID":"9ab9c597-592a-43bf-a3a1-f24bfabaab39","Type":"ContainerStarted","Data":"9925b02aad0a7e1441bf8855d7e62b374c5fd72878cf953746e481201c11494a"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.452921 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.454589 4764 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" event={"ID":"773a03ba-4a88-45c8-99f2-3fcc582e31a0","Type":"ContainerStarted","Data":"edbb38ba99dfa4b46079653b764cfa859ff6060e5da3501f837d252abc24b150"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.459677 4764 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2f44n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.459766 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" podUID="9ab9c597-592a-43bf-a3a1-f24bfabaab39" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.469941 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-trz85" podStartSLOduration=125.469913205 podStartE2EDuration="2m5.469913205s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.467453118 +0000 UTC m=+147.063075644" watchObservedRunningTime="2026-01-27 07:18:54.469913205 +0000 UTC m=+147.065535741" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.476801 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" event={"ID":"b4c388c4-2071-4b3b-97b6-52aec664b967","Type":"ContainerStarted","Data":"3aa8a46b744498fb757c38151d946b9abaa625ce51e6f412476579abafe4759d"} Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.486012 4764 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.488266 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.495425 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:54.995386303 +0000 UTC m=+147.591008829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.495698 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.503048 4764 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.003020722 +0000 UTC m=+147.598643248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.509584 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" podStartSLOduration=124.509554521 podStartE2EDuration="2m4.509554521s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.508006228 +0000 UTC m=+147.103628774" watchObservedRunningTime="2026-01-27 07:18:54.509554521 +0000 UTC m=+147.105177047" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.597953 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.599897 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:55.099862734 +0000 UTC m=+147.695485520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.616846 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" podStartSLOduration=125.616816798 podStartE2EDuration="2m5.616816798s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:54.607899134 +0000 UTC m=+147.203521660" watchObservedRunningTime="2026-01-27 07:18:54.616816798 +0000 UTC m=+147.212439324" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.703177 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.703703 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.203688238 +0000 UTC m=+147.799310764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.805185 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.805633 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.305602339 +0000 UTC m=+147.901224865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.807722 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:55.307712857 +0000 UTC m=+147.903335383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.806103 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.913641 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.914358 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.414324787 +0000 UTC m=+148.009947313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:54 crc kubenswrapper[4764]: I0127 07:18:54.914420 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:54 crc kubenswrapper[4764]: E0127 07:18:54.915050 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.415034966 +0000 UTC m=+148.010657492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.019089 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.019248 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.519215539 +0000 UTC m=+148.114838065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.021197 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.021653 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.521639326 +0000 UTC m=+148.117261852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.123314 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.123535 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.623504296 +0000 UTC m=+148.219126822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.123740 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.124149 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.624141963 +0000 UTC m=+148.219764489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.200726 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:18:55 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:18:55 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:18:55 crc kubenswrapper[4764]: healthz check failed Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.201273 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.226832 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.227334 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:55.727314139 +0000 UTC m=+148.322936665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.329173 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.829157768 +0000 UTC m=+148.424780294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.329575 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.430876 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.431220 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.931185632 +0000 UTC m=+148.526808158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.431590 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.431923 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:55.931902432 +0000 UTC m=+148.527524958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.515408 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" event={"ID":"9ab9c597-592a-43bf-a3a1-f24bfabaab39","Type":"ContainerStarted","Data":"53df11a89226a0afbcab031c34a0ea5c50d7de1130cac48f4641b2dde06d109a"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.538295 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.538652 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.038634135 +0000 UTC m=+148.634256651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.544908 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f44n" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.545605 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" event={"ID":"72d61e39-152a-497b-93ca-dec64d1a9849","Type":"ContainerStarted","Data":"25c7d47267ab7b51ed64331716635a909faf4ae6c86a26ebb3cf8da132973cfe"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.549083 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" event={"ID":"591d7bc8-2161-4f33-bf8d-38d89380509f","Type":"ContainerStarted","Data":"4b476375c41debd026d76b0b2a60f660f2e3b2ae841c7cab10a3a25328f639a4"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.549814 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.555715 4764 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hmmqc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.555788 4764 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" podUID="591d7bc8-2161-4f33-bf8d-38d89380509f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.557603 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" event={"ID":"91240e6f-a86e-405c-8182-bc4630e53033","Type":"ContainerStarted","Data":"000329892a9a251455b6af073e909998b3483e56c8258d6b89e4e76f2c222a05"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.557659 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" event={"ID":"91240e6f-a86e-405c-8182-bc4630e53033","Type":"ContainerStarted","Data":"cd13c61329dff2e8561a79908689ef5d2bf608fee9b894e90dd7380127f260c9"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.557677 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" event={"ID":"91240e6f-a86e-405c-8182-bc4630e53033","Type":"ContainerStarted","Data":"4d1b95b9323b9c92e5bb65c762e6f4ae1ef722966f6c20906f04b56bda2c2196"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.573232 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-258fq" event={"ID":"02fd817e-134c-4479-8537-2b332057d2b7","Type":"ContainerStarted","Data":"6983c70f1f782484bb35fd4765bc1e04a09ef652828ad4a26eabb52ec7702603"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.573291 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-258fq" 
event={"ID":"02fd817e-134c-4479-8537-2b332057d2b7","Type":"ContainerStarted","Data":"9396cee8d509e18a7f4a5e555e65a58a1edfea2f4b2c94b639ec5484ddb4a312"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.595611 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" event={"ID":"8995c5d6-63a9-4be5-8186-f5b46f750cd2","Type":"ContainerStarted","Data":"9b1be79f6ea38afee556fe9d0ac24b754aaa79f22e1314c944b6242d0029f231"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.598263 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" event={"ID":"b4c388c4-2071-4b3b-97b6-52aec664b967","Type":"ContainerStarted","Data":"e9e7949a9102249840104a22988ccdbafd29b2a1dda0d3d687a8a7ba89386e89"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.600280 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" event={"ID":"82032115-b907-44a7-a15a-50a3b4a89877","Type":"ContainerStarted","Data":"ffaa78b7f681eb50cd0626ac1f10907b24bd85565535d153ab3d9fa975354359"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.600308 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" event={"ID":"82032115-b907-44a7-a15a-50a3b4a89877","Type":"ContainerStarted","Data":"5c61f052730f92152ea7cfc8e884c4a67d73e48138361b1377a301b73e5dfd05"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.619483 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" event={"ID":"624845e8-1d2d-4aad-91b3-df98d48df6de","Type":"ContainerStarted","Data":"4107813fd1b645492ac321113e49a446cbf6b02e1bc795b7d34b07c4df577029"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.619547 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" event={"ID":"624845e8-1d2d-4aad-91b3-df98d48df6de","Type":"ContainerStarted","Data":"dc0d7f1979d211ffc404b181340958c156b7ac247b80f79068601190148232e2"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.621163 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wsfrz" podStartSLOduration=126.621134844 podStartE2EDuration="2m6.621134844s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:55.619627133 +0000 UTC m=+148.215249669" watchObservedRunningTime="2026-01-27 07:18:55.621134844 +0000 UTC m=+148.216757370" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.632760 4764 generic.go:334] "Generic (PLEG): container finished" podID="c0c0690a-4da4-49d6-9376-21d558d9df3c" containerID="a2d4e25f40dfe6b3682aa33302441c8abf5b9904b3746a942de327c0a65eb83e" exitCode=0 Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.633689 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" event={"ID":"c0c0690a-4da4-49d6-9376-21d558d9df3c","Type":"ContainerDied","Data":"a2d4e25f40dfe6b3682aa33302441c8abf5b9904b3746a942de327c0a65eb83e"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.635855 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" event={"ID":"f01849b3-05bc-4e0d-b130-7c9c426e7979","Type":"ContainerStarted","Data":"94a5fe12a697b699d6b781e51b0662cf37cfd14451e31890e68b5bbc8b2d7f7d"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.639967 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" 
event={"ID":"cdacf66a-e67e-4d9b-b080-40c0910efda9","Type":"ContainerStarted","Data":"d769d94564924955502f0f5532cd3c5f4414f5855ccdbf4f61ead178e1cafccb"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.640981 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.642841 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.644280 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.144262258 +0000 UTC m=+148.739884784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.645880 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" event={"ID":"b049cfac-c306-472f-ace1-bbbb32baf704","Type":"ContainerStarted","Data":"c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.650300 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.651864 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2vg4" podStartSLOduration=126.651840325 podStartE2EDuration="2m6.651840325s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:55.650716515 +0000 UTC m=+148.246339051" watchObservedRunningTime="2026-01-27 07:18:55.651840325 +0000 UTC m=+148.247462851" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.683989 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-plfpk" event={"ID":"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d","Type":"ContainerStarted","Data":"8fa881f8f5e0780982fa60a99fb01c0b5e37c5b5e4619dd8fafa60e817abe6e3"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 
07:18:55.684064 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-plfpk" event={"ID":"2d1e2f1d-fe12-44be-9410-1cbab59b3f1d","Type":"ContainerStarted","Data":"c01f8f4ab6b401c3acfe04e8c28d95f20b8a54b512c46dd8739337b3ec642906"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.682967 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" podStartSLOduration=125.682104524 podStartE2EDuration="2m5.682104524s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:55.681793846 +0000 UTC m=+148.277416372" watchObservedRunningTime="2026-01-27 07:18:55.682104524 +0000 UTC m=+148.277727050" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.688853 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dk6gm" event={"ID":"39f8297e-b534-44ff-9b38-4eb269960b80","Type":"ContainerStarted","Data":"016ea56561d88ffec24316ace0edcd8e04bd6e4d057b035e9f15b56fe14cd136"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.708127 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.712827 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6w74k" podStartSLOduration=126.712796145 podStartE2EDuration="2m6.712796145s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:55.701842035 +0000 UTC m=+148.297464571" watchObservedRunningTime="2026-01-27 07:18:55.712796145 
+0000 UTC m=+148.308418671" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.713745 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" event={"ID":"faf1a2aa-14d3-4870-9886-3c0c989ed0e0","Type":"ContainerStarted","Data":"f4e25b583918d2023ed182c3bbec971d16cbd38ec2852e9eeeb41e050dc61b24"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.744148 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.746848 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.246110277 +0000 UTC m=+148.841732803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.751117 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7d5nf" event={"ID":"69222b99-625f-4824-82d0-82c181574456","Type":"ContainerStarted","Data":"52dee4ff638fefc5983a112d0fc5dcc0821e00751f7b27bbe102f8c120b09e99"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.751162 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7d5nf" event={"ID":"69222b99-625f-4824-82d0-82c181574456","Type":"ContainerStarted","Data":"a9434da388bb65cf6c5cb895887c562d47661bdb29f13ec262c85ad3844cd4d5"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.827091 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" event={"ID":"a8ec2dea-a9fd-4661-847d-2b367f2b2ebc","Type":"ContainerStarted","Data":"88e7f77045defb3d6c4388597e9a04c70f9e68502f41763b32e1243c6716162f"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.850193 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.851127 4764 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.351109853 +0000 UTC m=+148.946732379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.860326 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6gp4n" podStartSLOduration=126.860303435 podStartE2EDuration="2m6.860303435s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:55.858032013 +0000 UTC m=+148.453654539" watchObservedRunningTime="2026-01-27 07:18:55.860303435 +0000 UTC m=+148.455925961" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.860528 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" podStartSLOduration=125.860524361 podStartE2EDuration="2m5.860524361s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:55.739041084 +0000 UTC m=+148.334663610" watchObservedRunningTime="2026-01-27 07:18:55.860524361 +0000 UTC m=+148.456146887" Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.887684 4764 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" event={"ID":"baeb63d9-2b68-4047-9fbd-ba4c05f872d9","Type":"ContainerStarted","Data":"3fc85ed0ac7def86400da4d1a93f8fa74d47fa538d26bbe067a620b47f64bbd2"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.893931 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" event={"ID":"c6167ceb-06dc-4253-892a-99f1c1feffb9","Type":"ContainerStarted","Data":"1cdb551f229498079c6a939e373e841374897db3aaf329f4c1943d83a3f041c0"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.902821 4764 generic.go:334] "Generic (PLEG): container finished" podID="67b6b877-b1b6-4e34-9e7d-560cddc3e4ed" containerID="6f8ecbc693a4d15aee8ba3738e1c689e790b01f9b4a04f5b008efdd813f3daac" exitCode=0 Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.902902 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" event={"ID":"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed","Type":"ContainerDied","Data":"6f8ecbc693a4d15aee8ba3738e1c689e790b01f9b4a04f5b008efdd813f3daac"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.902933 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" event={"ID":"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed","Type":"ContainerStarted","Data":"45cc80865b4a56523d4ca90d47dfffdf9272b77fc8deaca6dcb73e76b026d057"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.934801 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" event={"ID":"08928b4f-dcd5-4b90-837d-ba7f80007ba0","Type":"ContainerStarted","Data":"750f40762f72c5e8a6c52b734b730dad599195d17a7dbd67aace9a4c3e3f4191"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.952199 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:55 crc kubenswrapper[4764]: E0127 07:18:55.954087 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.454067803 +0000 UTC m=+149.049690329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.993139 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" event={"ID":"b40efb06-b036-44e2-a2fb-0845c3cf455b","Type":"ContainerStarted","Data":"315befe258ebc9cd2aaae7881e17757d54f8be6c4509e40eaf9dedf7ddf34c60"} Jan 27 07:18:55 crc kubenswrapper[4764]: I0127 07:18:55.993189 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" event={"ID":"b40efb06-b036-44e2-a2fb-0845c3cf455b","Type":"ContainerStarted","Data":"19a463d11315810892d3bfefecd5cbee34eae8c670074e21384ceb3c7c32a8df"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.040093 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.040169 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.041328 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-trz85" event={"ID":"4eb3f297-a2e8-4567-9953-8141e93ce37a","Type":"ContainerStarted","Data":"46ba1a8a011dd7bedf77e2be40d378352056a84a6bdd437b0fdfbe8655b20982"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.054839 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.056267 4764 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-xmc64 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.056337 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.556313533 +0000 UTC m=+149.151936249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.056328 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" podUID="c0e06490-3f43-42d6-aa12-bf37bd56a1cd" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.076067 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" event={"ID":"bf3c10ff-5fef-4acd-a698-176b6eafac68","Type":"ContainerStarted","Data":"5bc65499de0637a9e58400ce6f0d3e550e05f3dd8b1f11e9986c30bd7a8031d2"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.076126 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" event={"ID":"bf3c10ff-5fef-4acd-a698-176b6eafac68","Type":"ContainerStarted","Data":"ea8a9cf5bc82a8f997e62d1ade0ad135301f374af9d2fc0bc548c965c6e5555c"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.076139 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" event={"ID":"bf3c10ff-5fef-4acd-a698-176b6eafac68","Type":"ContainerStarted","Data":"bd8e94ab2d0811d7ea0241b18cd55510795046c1309dfc2067f0851aa58cf47f"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.077931 4764 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.117847 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" podStartSLOduration=127.117823828 podStartE2EDuration="2m7.117823828s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.114473666 +0000 UTC m=+148.710096192" watchObservedRunningTime="2026-01-27 07:18:56.117823828 +0000 UTC m=+148.713446354" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.118099 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m8bbk" event={"ID":"9dc9b688-8bcc-404b-9688-cfcd405b8075","Type":"ContainerStarted","Data":"801acb99794f7cfbb2bc3e0fb324f61576be10920ada522a7c6712fcbd36e1fb"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.150800 4764 csr.go:261] certificate signing request csr-qfkwk is approved, waiting to be issued Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.155709 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" event={"ID":"5fdc696b-b0c9-4452-80d5-e816379cf155","Type":"ContainerStarted","Data":"ffba45839fd6c21e582ba83e042198051c1fc10d954edbd71bb2225e4ccf5367"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.156009 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:56 crc 
kubenswrapper[4764]: E0127 07:18:56.157379 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.65735593 +0000 UTC m=+149.252978446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.171067 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d45gk" podStartSLOduration=126.171045675 podStartE2EDuration="2m6.171045675s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.16356664 +0000 UTC m=+148.759189166" watchObservedRunningTime="2026-01-27 07:18:56.171045675 +0000 UTC m=+148.766668191" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.179096 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" event={"ID":"773a03ba-4a88-45c8-99f2-3fcc582e31a0","Type":"ContainerStarted","Data":"21897f3deb76a0f84b4301b18a7681b76ed5a1a8a112cae1a27fc7a9b16f4f30"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.187917 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:18:56 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:18:56 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:18:56 crc kubenswrapper[4764]: healthz check failed Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.188015 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.207051 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" event={"ID":"a8518827-9b43-4e92-8816-5e0af41bbfee","Type":"ContainerStarted","Data":"a8b5d142c9743bf1b1c36e60cf3034418de97757e060b726075de03ba1bae4c0"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.208340 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.230972 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" event={"ID":"a13966a5-d594-49f0-9e9d-f3b2cdc2e235","Type":"ContainerStarted","Data":"955d024981f758a3f7b82988f405669b8ee11ea024cdffe48515beefd385899e"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.231038 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" event={"ID":"a13966a5-d594-49f0-9e9d-f3b2cdc2e235","Type":"ContainerStarted","Data":"a5ebae061c192900a7f6d9a8bf2c954fb7d04951746ecf7b4270e9eae9472770"} Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.231911 4764 patch_prober.go:28] interesting pod/downloads-7954f5f757-989np container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.231978 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-989np" podUID="b866a424-f51d-42cc-9ac6-0656d94083b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.244971 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.245170 4764 csr.go:257] certificate signing request csr-qfkwk is issued Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.271511 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.275966 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.775943138 +0000 UTC m=+149.371565664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.336621 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d8w4l" podStartSLOduration=126.336594079 podStartE2EDuration="2m6.336594079s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.305046025 +0000 UTC m=+148.900668551" watchObservedRunningTime="2026-01-27 07:18:56.336594079 +0000 UTC m=+148.932216605" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.337581 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" podStartSLOduration=126.337576916 podStartE2EDuration="2m6.337576916s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.335459078 +0000 UTC m=+148.931081614" watchObservedRunningTime="2026-01-27 07:18:56.337576916 +0000 UTC m=+148.933199442" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.374088 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.375834 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.875796183 +0000 UTC m=+149.471418709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.378240 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" podStartSLOduration=126.37822668 podStartE2EDuration="2m6.37822668s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.378158348 +0000 UTC m=+148.973780874" watchObservedRunningTime="2026-01-27 07:18:56.37822668 +0000 UTC m=+148.973849206" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.442711 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bdfcp" podStartSLOduration=126.442679475 podStartE2EDuration="2m6.442679475s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.427839998 
+0000 UTC m=+149.023462524" watchObservedRunningTime="2026-01-27 07:18:56.442679475 +0000 UTC m=+149.038302001" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.479995 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.480341 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:56.980326446 +0000 UTC m=+149.575948972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.488670 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nknl2" podStartSLOduration=127.488653804 podStartE2EDuration="2m7.488653804s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.482931527 +0000 UTC m=+149.078554053" watchObservedRunningTime="2026-01-27 07:18:56.488653804 +0000 UTC m=+149.084276330" Jan 27 
07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.530721 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tp9hs" podStartSLOduration=127.530698205 podStartE2EDuration="2m7.530698205s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.528528176 +0000 UTC m=+149.124150702" watchObservedRunningTime="2026-01-27 07:18:56.530698205 +0000 UTC m=+149.126320731" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.580839 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.581284 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.08126501 +0000 UTC m=+149.676887536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.624793 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-plfpk" podStartSLOduration=7.624771151 podStartE2EDuration="7.624771151s" podCreationTimestamp="2026-01-27 07:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.62438556 +0000 UTC m=+149.220008086" watchObservedRunningTime="2026-01-27 07:18:56.624771151 +0000 UTC m=+149.220393677" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.625177 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-495p2" podStartSLOduration=127.625173262 podStartE2EDuration="2m7.625173262s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.568059389 +0000 UTC m=+149.163681915" watchObservedRunningTime="2026-01-27 07:18:56.625173262 +0000 UTC m=+149.220795788" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.655526 4764 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-b2jhk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.655605 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.682764 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.683270 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.183254723 +0000 UTC m=+149.778877249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.733282 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-44vzr" podStartSLOduration=128.733259202 podStartE2EDuration="2m8.733259202s" podCreationTimestamp="2026-01-27 07:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.680005354 +0000 UTC m=+149.275627880" watchObservedRunningTime="2026-01-27 07:18:56.733259202 +0000 UTC m=+149.328881728" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.776425 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6p9wc" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.801584 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.820405 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:57.320360408 +0000 UTC m=+149.915982934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.844756 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" podStartSLOduration=126.844733775 podStartE2EDuration="2m6.844733775s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.842168635 +0000 UTC m=+149.437791161" watchObservedRunningTime="2026-01-27 07:18:56.844733775 +0000 UTC m=+149.440356301" Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.903529 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:56 crc kubenswrapper[4764]: E0127 07:18:56.904037 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.404017059 +0000 UTC m=+149.999639585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:56 crc kubenswrapper[4764]: I0127 07:18:56.943205 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4n9lw" podStartSLOduration=127.943184721 podStartE2EDuration="2m7.943184721s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:56.88579318 +0000 UTC m=+149.481415706" watchObservedRunningTime="2026-01-27 07:18:56.943184721 +0000 UTC m=+149.538807247" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.002075 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" podStartSLOduration=128.002051304 podStartE2EDuration="2m8.002051304s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:57.00080643 +0000 UTC m=+149.596428956" watchObservedRunningTime="2026-01-27 07:18:57.002051304 +0000 UTC m=+149.597673830" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.007002 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.007586 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.507555174 +0000 UTC m=+150.103177700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.044606 4764 patch_prober.go:28] interesting pod/console-operator-58897d9998-trz85 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.044689 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-trz85" podUID="4eb3f297-a2e8-4567-9953-8141e93ce37a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.108776 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.109355 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.609338502 +0000 UTC m=+150.204961018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.126566 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" podStartSLOduration=127.126547713 podStartE2EDuration="2m7.126547713s" podCreationTimestamp="2026-01-27 07:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:57.078369414 +0000 UTC m=+149.673991940" watchObservedRunningTime="2026-01-27 07:18:57.126547713 +0000 UTC m=+149.722170239" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.190621 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 
07:18:57 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:18:57 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:18:57 crc kubenswrapper[4764]: healthz check failed Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.190708 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.211722 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.212174 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.712147688 +0000 UTC m=+150.307770214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.246324 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 07:13:56 +0000 UTC, rotation deadline is 2026-12-12 16:30:25.470988211 +0000 UTC Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.246381 4764 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7665h11m28.224610366s for next certificate rotation Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.262111 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kkwh8" event={"ID":"624845e8-1d2d-4aad-91b3-df98d48df6de","Type":"ContainerStarted","Data":"80747324fd1f9e74a0ed2bd39d1731061089c69a3b8cc06e9489b3a21f224383"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.269405 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4xkk6" event={"ID":"baeb63d9-2b68-4047-9fbd-ba4c05f872d9","Type":"ContainerStarted","Data":"800b2ce76bbd5d85dbaf1d987bc7ec50247edaba0f2850a3315261d9807c1104"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.272791 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-258fq" event={"ID":"02fd817e-134c-4479-8537-2b332057d2b7","Type":"ContainerStarted","Data":"e8e17cdba1a1ce0ac5a0a20bc902194efe2eee57af5111b6d02fc2fcccd1bf83"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.280588 4764 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qs5v" event={"ID":"08928b4f-dcd5-4b90-837d-ba7f80007ba0","Type":"ContainerStarted","Data":"e3ff2a4ff127b5ff84693a8205614399a849f2693e30a1712324e1ae1f9d0af8"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.305843 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-258fq" podStartSLOduration=128.305825893 podStartE2EDuration="2m8.305825893s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:57.300762885 +0000 UTC m=+149.896385411" watchObservedRunningTime="2026-01-27 07:18:57.305825893 +0000 UTC m=+149.901448419" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.314126 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.314508 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.814492821 +0000 UTC m=+150.410115347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.323720 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" event={"ID":"c0c0690a-4da4-49d6-9376-21d558d9df3c","Type":"ContainerStarted","Data":"8958a20e1277a6dccd3b7160779e4980aed302f73bda70473482c02c59f013b8"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.323822 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" event={"ID":"c0c0690a-4da4-49d6-9376-21d558d9df3c","Type":"ContainerStarted","Data":"ef8d9e01758e164f6b5eb8dcbec01db4ef4ffcd8de76495f52238672c32ee017"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.330589 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7d5nf" event={"ID":"69222b99-625f-4824-82d0-82c181574456","Type":"ContainerStarted","Data":"3ac45077f66dfb24a63c284e3bb4e1710e09fd0e1db07814ca7b9143015ccab4"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.331296 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7d5nf" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.334016 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" event={"ID":"c0e06490-3f43-42d6-aa12-bf37bd56a1cd","Type":"ContainerStarted","Data":"94a6af6ba1462fee237d5a3e10cf8952e2d88ce486b67ca3ef33fac3981d72f7"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.350886 4764 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" event={"ID":"c6167ceb-06dc-4253-892a-99f1c1feffb9","Type":"ContainerStarted","Data":"6791c9d66783beae98a3f4afa4c22274893df6cea37909c5c27582e3a1d785cc"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.373152 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" event={"ID":"67b6b877-b1b6-4e34-9e7d-560cddc3e4ed","Type":"ContainerStarted","Data":"cf0fb9e5daf42de74772698afb85fbfca789d717d08993b5ad0b5769c3d919f4"} Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.374048 4764 patch_prober.go:28] interesting pod/downloads-7954f5f757-989np container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.374108 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-989np" podUID="b866a424-f51d-42cc-9ac6-0656d94083b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.381716 4764 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hmmqc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.381788 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" podUID="591d7bc8-2161-4f33-bf8d-38d89380509f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 
10.217.0.22:8080: connect: connection refused" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.387846 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pblvj"] Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.389265 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.395804 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.405129 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" podStartSLOduration=128.405099672 podStartE2EDuration="2m8.405099672s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:57.394674467 +0000 UTC m=+149.990297003" watchObservedRunningTime="2026-01-27 07:18:57.405099672 +0000 UTC m=+150.000722198" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.417950 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.418161 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.418254 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.418342 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.418573 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.419639 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:57.91961794 +0000 UTC m=+150.515240466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.422140 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.441942 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" podStartSLOduration=128.441923641 podStartE2EDuration="2m8.441923641s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:57.440255695 +0000 UTC m=+150.035878221" watchObservedRunningTime="2026-01-27 07:18:57.441923641 +0000 UTC m=+150.037546167" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.459125 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.464197 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.465706 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pblvj"] Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.478386 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.480381 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.508269 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.520365 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-utilities\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.520711 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.520865 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvzdz\" (UniqueName: \"kubernetes.io/projected/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-kube-api-access-jvzdz\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.520894 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-catalog-content\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.548930 4764 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.048905061 +0000 UTC m=+150.644527587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.572159 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7d5nf" podStartSLOduration=9.572122637 podStartE2EDuration="9.572122637s" podCreationTimestamp="2026-01-27 07:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:18:57.569013902 +0000 UTC m=+150.164636428" watchObservedRunningTime="2026-01-27 07:18:57.572122637 +0000 UTC m=+150.167745163" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.624348 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.624850 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-utilities\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " 
pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.624937 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvzdz\" (UniqueName: \"kubernetes.io/projected/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-kube-api-access-jvzdz\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.624961 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-catalog-content\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.625532 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-catalog-content\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.627832 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-utilities\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.627922 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:58.127899764 +0000 UTC m=+150.723522290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.674674 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.685920 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nlsvp"] Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.687032 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.697792 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.724499 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvzdz\" (UniqueName: \"kubernetes.io/projected/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-kube-api-access-jvzdz\") pod \"certified-operators-pblvj\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.729227 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-utilities\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.729345 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvbw\" (UniqueName: \"kubernetes.io/projected/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-kube-api-access-xzvbw\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.729903 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.229885357 +0000 UTC m=+150.825507883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.729392 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.730221 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-catalog-content\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.737143 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlsvp"] Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.764508 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.834745 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.834922 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-catalog-content\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.834967 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-utilities\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.835012 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzvbw\" (UniqueName: \"kubernetes.io/projected/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-kube-api-access-xzvbw\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.835480 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:58.335458489 +0000 UTC m=+150.931081015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.835890 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-catalog-content\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.847403 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-utilities\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.867903 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-94qqk"] Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.869267 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.883483 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-trz85" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.931906 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94qqk"] Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.937630 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-catalog-content\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.937718 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.937764 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-utilities\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.937798 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2zbq\" (UniqueName: 
\"kubernetes.io/projected/23aafd6e-3e01-40a4-82bb-4315c225500b-kube-api-access-z2zbq\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:57 crc kubenswrapper[4764]: E0127 07:18:57.938176 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.438158892 +0000 UTC m=+151.033781418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.953060 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzvbw\" (UniqueName: \"kubernetes.io/projected/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-kube-api-access-xzvbw\") pod \"community-operators-nlsvp\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.966553 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5l9p"] Jan 27 07:18:57 crc kubenswrapper[4764]: I0127 07:18:57.974526 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.000174 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5l9p"] Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.015789 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.016677 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.040311 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.040541 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-catalog-content\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.040611 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-utilities\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.040638 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2zbq\" (UniqueName: 
\"kubernetes.io/projected/23aafd6e-3e01-40a4-82bb-4315c225500b-kube-api-access-z2zbq\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.040665 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbm5x\" (UniqueName: \"kubernetes.io/projected/84b43e28-67a7-4296-85aa-cc66fdfc2449-kube-api-access-hbm5x\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.040693 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-utilities\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.040726 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-catalog-content\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.040860 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.540834204 +0000 UTC m=+151.136456730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.041300 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-catalog-content\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.041541 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-utilities\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.142888 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2zbq\" (UniqueName: \"kubernetes.io/projected/23aafd6e-3e01-40a4-82bb-4315c225500b-kube-api-access-z2zbq\") pod \"certified-operators-94qqk\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.143807 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbm5x\" (UniqueName: \"kubernetes.io/projected/84b43e28-67a7-4296-85aa-cc66fdfc2449-kube-api-access-hbm5x\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " 
pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.143854 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-utilities\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.143901 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-catalog-content\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.143965 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.144486 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.644464932 +0000 UTC m=+151.240087468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.145324 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-utilities\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.145698 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-catalog-content\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.208852 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:18:58 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:18:58 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:18:58 crc kubenswrapper[4764]: healthz check failed Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.208947 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.253806 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.254223 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.754204807 +0000 UTC m=+151.349827333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.262006 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbm5x\" (UniqueName: \"kubernetes.io/projected/84b43e28-67a7-4296-85aa-cc66fdfc2449-kube-api-access-hbm5x\") pod \"community-operators-d5l9p\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.300856 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.356230 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.356763 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.856745096 +0000 UTC m=+151.452367622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: W0127 07:18:58.358845 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-69d74e106022a1c3a5a9802f21ee1c6aac220ec94154d842f24e1bf1ec633e6b WatchSource:0}: Error finding container 69d74e106022a1c3a5a9802f21ee1c6aac220ec94154d842f24e1bf1ec633e6b: Status 404 returned error can't find the container with id 69d74e106022a1c3a5a9802f21ee1c6aac220ec94154d842f24e1bf1ec633e6b Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.368810 4764 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.374466 4764 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-b2jhk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.374547 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.460267 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.460742 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:58.960721273 +0000 UTC m=+151.556343799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.564953 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"69d74e106022a1c3a5a9802f21ee1c6aac220ec94154d842f24e1bf1ec633e6b"} Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.566264 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.566660 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.066645884 +0000 UTC m=+151.662268410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.648434 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" event={"ID":"c6167ceb-06dc-4253-892a-99f1c1feffb9","Type":"ContainerStarted","Data":"0a1afbc15e6fe50e8808c562f96cf4477d9f0403c213badc408764be3b389911"} Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.668098 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.670841 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.170814947 +0000 UTC m=+151.766437473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.769757 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.779500 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.279461753 +0000 UTC m=+151.875084279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.872932 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.876994 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.376943003 +0000 UTC m=+151.972565529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.885044 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.890909 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.390869154 +0000 UTC m=+151.986491680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:58 crc kubenswrapper[4764]: I0127 07:18:58.992271 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:58 crc kubenswrapper[4764]: E0127 07:18:58.992671 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.492652642 +0000 UTC m=+152.088275168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.095491 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.095897 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.595881419 +0000 UTC m=+152.191503945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.189290 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:18:59 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:18:59 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:18:59 crc kubenswrapper[4764]: healthz check failed Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.189767 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.202882 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.203230 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:18:59.703215689 +0000 UTC m=+152.298838215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.204216 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pblvj"] Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.306628 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.307064 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.807046672 +0000 UTC m=+152.402669198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.383047 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcjk"] Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.384380 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.391086 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.410421 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.410875 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:18:59.910856245 +0000 UTC m=+152.506478771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.440556 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcjk"] Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.460575 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlsvp"] Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.512408 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.512519 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-utilities\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.512550 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj44t\" (UniqueName: \"kubernetes.io/projected/feb5a7ef-0513-4f04-8232-9490e959628d-kube-api-access-gj44t\") pod \"redhat-marketplace-mjcjk\" (UID: 
\"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.512586 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-catalog-content\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.512967 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.012952982 +0000 UTC m=+152.608575508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.552654 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94qqk"] Jan 27 07:18:59 crc kubenswrapper[4764]: W0127 07:18:59.596774 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa217f31_3e10_46b8_a8f0_7ebe2a663bf1.slice/crio-107efd72a335cb3273fa0e53add39bf7225c4780d22750c4cfa2abd50ea9de2f WatchSource:0}: Error finding container 107efd72a335cb3273fa0e53add39bf7225c4780d22750c4cfa2abd50ea9de2f: Status 404 returned error can't find the container with id 
107efd72a335cb3273fa0e53add39bf7225c4780d22750c4cfa2abd50ea9de2f Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.614894 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.615220 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-catalog-content\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.615323 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-utilities\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.615350 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj44t\" (UniqueName: \"kubernetes.io/projected/feb5a7ef-0513-4f04-8232-9490e959628d-kube-api-access-gj44t\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.615958 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:19:00.115936572 +0000 UTC m=+152.711559098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.616428 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-catalog-content\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.618754 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-utilities\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.694795 4764 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.698364 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj44t\" (UniqueName: \"kubernetes.io/projected/feb5a7ef-0513-4f04-8232-9490e959628d-kube-api-access-gj44t\") pod \"redhat-marketplace-mjcjk\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") " pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc 
kubenswrapper[4764]: I0127 07:18:59.721553 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.721904 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.221890834 +0000 UTC m=+152.817513360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.750995 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.752735 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pblvj" event={"ID":"6ab6b260-c1b8-49ff-aa32-54abd16f0b66","Type":"ContainerStarted","Data":"3197ffbf0d9be800f619cae7c34776f376fdc64e712d073d6be0b0945c3a63e3"} Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.770571 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlsvp" event={"ID":"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1","Type":"ContainerStarted","Data":"107efd72a335cb3273fa0e53add39bf7225c4780d22750c4cfa2abd50ea9de2f"} Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.810988 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94qqk" event={"ID":"23aafd6e-3e01-40a4-82bb-4315c225500b","Type":"ContainerStarted","Data":"392f57df3c459c289292c5a2d80650adc5c1311b619637e5bb1d012c396c09f2"} Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.814521 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b2w9f"] Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.815784 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.823319 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.842215 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.342169828 +0000 UTC m=+152.937792364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.843950 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5l9p"] Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.888034 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2w9f"] Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.938136 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.938254 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-catalog-content\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.938300 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xp7v\" (UniqueName: \"kubernetes.io/projected/682f26c7-1456-4103-b2be-9f3f92eb643d-kube-api-access-8xp7v\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.938322 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-utilities\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:18:59 crc kubenswrapper[4764]: E0127 07:18:59.938811 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.438796085 +0000 UTC m=+153.034418611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.973102 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"31cdef20acd81059fc7c078ee86b7cbbf1f14d719e6ab1901cd6852e41bdcf29"} Jan 27 07:18:59 crc kubenswrapper[4764]: I0127 07:18:59.973540 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"050fa74e0d5b35cc05a738f324a766de749bd42a1ad36a29cc8cc550ffa26da5"} Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.014885 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f5a03f4ee8f7cd9bb8332f30f7ec0d13b1172994bee7a6c96db82ea7643f5ee9"} Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.014949 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a13b778cb93c31d7d80a4aecf545c4f4da350989466d2b10ce21e52e91726bac"} Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.036424 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" 
event={"ID":"c6167ceb-06dc-4253-892a-99f1c1feffb9","Type":"ContainerStarted","Data":"8b213754034f86be3cd5781069fa1e519ec82f73909b55d73933b3aaf2282998"} Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.040625 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ce76fc2773aa2371a0e9c8f8ddb993613ee8d7fc1fe7869bbf237bde35f4ce32"} Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.044924 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.045206 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-catalog-content\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.045241 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xp7v\" (UniqueName: \"kubernetes.io/projected/682f26c7-1456-4103-b2be-9f3f92eb643d-kube-api-access-8xp7v\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.045284 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-utilities\") pod \"redhat-marketplace-b2w9f\" (UID: 
\"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.045819 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-utilities\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.045903 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.545887008 +0000 UTC m=+153.141509534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.047155 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-catalog-content\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.062998 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hsrb8" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.067384 4764 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.068206 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.087327 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.087644 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.097504 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.101751 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xp7v\" (UniqueName: \"kubernetes.io/projected/682f26c7-1456-4103-b2be-9f3f92eb643d-kube-api-access-8xp7v\") pod \"redhat-marketplace-b2w9f\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.151419 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.151523 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kubelet-dir\") pod \"revision-pruner-9-crc\" 
(UID: \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.151585 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.151945 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.651932422 +0000 UTC m=+153.247554948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.205509 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.222752 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:00 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:00 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:00 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.223236 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.252270 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.252685 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.252791 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.253583 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.753566404 +0000 UTC m=+153.349188930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.253631 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.291102 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.353919 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.354315 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:19:00.854299523 +0000 UTC m=+153.449922059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.435078 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.456086 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.456622 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:19:00.956601915 +0000 UTC m=+153.552224441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.530324 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcjk"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.557470 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.557936 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:19:01.05792304 +0000 UTC m=+153.653545566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: W0127 07:19:00.561193 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeb5a7ef_0513_4f04_8232_9490e959628d.slice/crio-0f827a78cafa65bd18bd803bdb545b72f053672fe7650b2417440fb0640bf1c5 WatchSource:0}: Error finding container 0f827a78cafa65bd18bd803bdb545b72f053672fe7650b2417440fb0640bf1c5: Status 404 returned error can't find the container with id 0f827a78cafa65bd18bd803bdb545b72f053672fe7650b2417440fb0640bf1c5 Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.573677 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddpln"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.575511 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.583877 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.617029 4764 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T07:18:59.695141901Z","Handler":null,"Name":""} Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.638107 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddpln"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.652038 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.658957 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.659163 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:19:01.159131862 +0000 UTC m=+153.754754388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.659369 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljtm\" (UniqueName: \"kubernetes.io/projected/ba301678-dac1-45dd-a1fc-6db20a2f38aa-kube-api-access-6ljtm\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.659427 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-utilities\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.659490 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-catalog-content\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.659521 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: E0127 07:19:00.659864 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:19:01.159851202 +0000 UTC m=+153.755473728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwsfp" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.663283 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.680776 4764 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.680859 4764 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.681865 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.682540 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.687309 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.734647 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2w9f"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.764754 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.764982 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-utilities\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") 
" pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.765028 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-catalog-content\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.765077 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae40783e-3486-421b-b86c-9229ba9799d8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.765112 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljtm\" (UniqueName: \"kubernetes.io/projected/ba301678-dac1-45dd-a1fc-6db20a2f38aa-kube-api-access-6ljtm\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.765136 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae40783e-3486-421b-b86c-9229ba9799d8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.765801 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-catalog-content\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " 
pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.766613 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-utilities\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.798857 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.813350 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljtm\" (UniqueName: \"kubernetes.io/projected/ba301678-dac1-45dd-a1fc-6db20a2f38aa-kube-api-access-6ljtm\") pod \"redhat-operators-ddpln\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") " pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.866764 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae40783e-3486-421b-b86c-9229ba9799d8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.866857 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.866896 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae40783e-3486-421b-b86c-9229ba9799d8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.866976 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae40783e-3486-421b-b86c-9229ba9799d8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.870597 4764 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.870629 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.890255 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae40783e-3486-421b-b86c-9229ba9799d8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.898454 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwsfp\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.899399 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.944870 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.974420 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6rm5p"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.979797 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.986987 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.987241 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.987496 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.996763 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rm5p"] Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.998029 4764 patch_prober.go:28] interesting pod/apiserver-76f77b778f-drm8b container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]log ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]etcd ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 07:19:00 crc kubenswrapper[4764]: 
[+]poststarthook/max-in-flight-filter ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 07:19:00 crc kubenswrapper[4764]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 07:19:00 crc kubenswrapper[4764]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/openshift.io-startinformers ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 07:19:00 crc kubenswrapper[4764]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 07:19:00 crc kubenswrapper[4764]: livez check failed Jan 27 07:19:00 crc kubenswrapper[4764]: I0127 07:19:00.998080 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" podUID="c0c0690a-4da4-49d6-9376-21d558d9df3c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.037208 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.045776 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.052047 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmc64" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.075254 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-utilities\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.075324 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b957w\" (UniqueName: \"kubernetes.io/projected/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-kube-api-access-b957w\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.075398 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-catalog-content\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.087133 4764 generic.go:334] "Generic (PLEG): container finished" podID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerID="c3d7c29efb61f8d31448433be8fd205c568c5f80cee37727249fd5a8727098b2" exitCode=0 Jan 27 07:19:01 crc kubenswrapper[4764]: 
I0127 07:19:01.087200 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pblvj" event={"ID":"6ab6b260-c1b8-49ff-aa32-54abd16f0b66","Type":"ContainerDied","Data":"c3d7c29efb61f8d31448433be8fd205c568c5f80cee37727249fd5a8727098b2"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.142918 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.155828 4764 generic.go:334] "Generic (PLEG): container finished" podID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerID="52d309afb5c00cb13935d751526b9d93c4db01a4540f7b5cf98a556d53b2e89b" exitCode=0 Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.155947 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlsvp" event={"ID":"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1","Type":"ContainerDied","Data":"52d309afb5c00cb13935d751526b9d93c4db01a4540f7b5cf98a556d53b2e89b"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.169362 4764 generic.go:334] "Generic (PLEG): container finished" podID="feb5a7ef-0513-4f04-8232-9490e959628d" containerID="329ad7390da28ee8b509313d0ec61d394834a11156fbd0bb2c38ee7c8bfaafbf" exitCode=0 Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.169458 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcjk" event={"ID":"feb5a7ef-0513-4f04-8232-9490e959628d","Type":"ContainerDied","Data":"329ad7390da28ee8b509313d0ec61d394834a11156fbd0bb2c38ee7c8bfaafbf"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.169494 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcjk" event={"ID":"feb5a7ef-0513-4f04-8232-9490e959628d","Type":"ContainerStarted","Data":"0f827a78cafa65bd18bd803bdb545b72f053672fe7650b2417440fb0640bf1c5"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.177388 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-utilities\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.177449 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b957w\" (UniqueName: \"kubernetes.io/projected/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-kube-api-access-b957w\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.177517 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-catalog-content\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.178351 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-catalog-content\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.179840 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-utilities\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.179939 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.193619 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:01 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:01 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:01 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.193707 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.198069 4764 generic.go:334] "Generic (PLEG): container finished" podID="b4c388c4-2071-4b3b-97b6-52aec664b967" containerID="e9e7949a9102249840104a22988ccdbafd29b2a1dda0d3d687a8a7ba89386e89" exitCode=0 Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.198198 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" event={"ID":"b4c388c4-2071-4b3b-97b6-52aec664b967","Type":"ContainerDied","Data":"e9e7949a9102249840104a22988ccdbafd29b2a1dda0d3d687a8a7ba89386e89"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.235028 4764 generic.go:334] "Generic (PLEG): container finished" podID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerID="c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c" exitCode=0 Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.235211 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5l9p" 
event={"ID":"84b43e28-67a7-4296-85aa-cc66fdfc2449","Type":"ContainerDied","Data":"c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.235251 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5l9p" event={"ID":"84b43e28-67a7-4296-85aa-cc66fdfc2449","Type":"ContainerStarted","Data":"74fcb60d9214fc47f1e8e8eb9730fc30d2c18bfd2a32de538500cb27276eab3b"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.260174 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b957w\" (UniqueName: \"kubernetes.io/projected/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-kube-api-access-b957w\") pod \"redhat-operators-6rm5p\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.261584 4764 patch_prober.go:28] interesting pod/downloads-7954f5f757-989np container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.261629 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-989np" podUID="b866a424-f51d-42cc-9ac6-0656d94083b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.262403 4764 patch_prober.go:28] interesting pod/downloads-7954f5f757-989np container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.262430 4764 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-989np" podUID="b866a424-f51d-42cc-9ac6-0656d94083b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.273005 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016","Type":"ContainerStarted","Data":"f8196bfbcf17a9706d4ad520f622e0548183a3b3668a1fd2ad2fd43fc2f3e02a"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.276306 4764 generic.go:334] "Generic (PLEG): container finished" podID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerID="17124029e0f7886f9068c5dbb2a517225cdbac13d0b0b3a32e1d6c12bffbd563" exitCode=0 Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.276649 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2w9f" event={"ID":"682f26c7-1456-4103-b2be-9f3f92eb643d","Type":"ContainerDied","Data":"17124029e0f7886f9068c5dbb2a517225cdbac13d0b0b3a32e1d6c12bffbd563"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.276702 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2w9f" event={"ID":"682f26c7-1456-4103-b2be-9f3f92eb643d","Type":"ContainerStarted","Data":"3374f7b0306b3417b1cb5eba6bbbb98a4848ced8f4f6fb70a226fe64ec74c64f"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.282753 4764 generic.go:334] "Generic (PLEG): container finished" podID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerID="b106d7da0f6fe1591ba60d291a38930091d2e85d0943d16e991defb194dff610" exitCode=0 Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.282831 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94qqk" 
event={"ID":"23aafd6e-3e01-40a4-82bb-4315c225500b","Type":"ContainerDied","Data":"b106d7da0f6fe1591ba60d291a38930091d2e85d0943d16e991defb194dff610"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.289253 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.328279 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" event={"ID":"c6167ceb-06dc-4253-892a-99f1c1feffb9","Type":"ContainerStarted","Data":"b3491900352bebdf19173e8d085de4a7a57eb8f27d43dd06a7471c50664d0063"} Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.349210 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddpln"] Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.364839 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.436883 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hkrl5" podStartSLOduration=13.436856752 podStartE2EDuration="13.436856752s" podCreationTimestamp="2026-01-27 07:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:19:01.431794053 +0000 UTC m=+154.027416579" watchObservedRunningTime="2026-01-27 07:19:01.436856752 +0000 UTC m=+154.032479278" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.545676 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwsfp"] Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.607669 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:19:01 crc 
kubenswrapper[4764]: I0127 07:19:01.607915 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.610372 4764 patch_prober.go:28] interesting pod/console-f9d7485db-dk6gm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.610490 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dk6gm" podUID="39f8297e-b534-44ff-9b38-4eb269960b80" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.676020 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.770385 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:19:01 crc kubenswrapper[4764]: I0127 07:19:01.871876 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rm5p"] Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.185529 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:02 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:02 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:02 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.185601 4764 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.357733 4764 generic.go:334] "Generic (PLEG): container finished" podID="d5f433ea-ea3f-4cea-a3f7-0a9b1456e016" containerID="46e8ade65707652fe06715b4c5ea1348620c9d64c5b153873317b38351ec74aa" exitCode=0 Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.357909 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016","Type":"ContainerDied","Data":"46e8ade65707652fe06715b4c5ea1348620c9d64c5b153873317b38351ec74aa"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.369293 4764 generic.go:334] "Generic (PLEG): container finished" podID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerID="02e3489cf3931aed62701d5ec66a8e13573e1a2e7033a7c6d6b42419fa8d43e6" exitCode=0 Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.369359 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rm5p" event={"ID":"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa","Type":"ContainerDied","Data":"02e3489cf3931aed62701d5ec66a8e13573e1a2e7033a7c6d6b42419fa8d43e6"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.369390 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rm5p" event={"ID":"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa","Type":"ContainerStarted","Data":"65affe3d76602d14e3ac1064fef257173728149948badcdf2026f012f9755316"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.373273 4764 generic.go:334] "Generic (PLEG): container finished" podID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerID="e42158ba9b0e0cb93922c3a48440f097db35d5f82e7b8dc5fef663a8fa357b96" exitCode=0 Jan 27 07:19:02 crc kubenswrapper[4764]: 
I0127 07:19:02.373327 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpln" event={"ID":"ba301678-dac1-45dd-a1fc-6db20a2f38aa","Type":"ContainerDied","Data":"e42158ba9b0e0cb93922c3a48440f097db35d5f82e7b8dc5fef663a8fa357b96"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.373352 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpln" event={"ID":"ba301678-dac1-45dd-a1fc-6db20a2f38aa","Type":"ContainerStarted","Data":"ccb6460b230fd65f02204b14d959a74796915fc31749c7862334abad232e8ddd"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.389429 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" event={"ID":"45ab3850-93b1-42f7-9a7d-243951b7a0d4","Type":"ContainerStarted","Data":"8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.389518 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" event={"ID":"45ab3850-93b1-42f7-9a7d-243951b7a0d4","Type":"ContainerStarted","Data":"bfaab6d41866cb949183eb120696fb759edccedb4da3a9f8e26bf17355aa4aaf"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.389657 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.396971 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ae40783e-3486-421b-b86c-9229ba9799d8","Type":"ContainerStarted","Data":"a11da60d196b1633e2194bbd4ecd11b7aadf362ec84ac4ed4a0a47c41ab87601"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.397014 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"ae40783e-3486-421b-b86c-9229ba9799d8","Type":"ContainerStarted","Data":"eaf5e2f5cc37ab357dd68dbba53d9b7f78ed69d91e7c1e8184bf3553236a25c7"} Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.460812 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.492271 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" podStartSLOduration=133.492246977 podStartE2EDuration="2m13.492246977s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:19:02.489216854 +0000 UTC m=+155.084839380" watchObservedRunningTime="2026-01-27 07:19:02.492246977 +0000 UTC m=+155.087869493" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.547089 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.547063418 podStartE2EDuration="2.547063418s" podCreationTimestamp="2026-01-27 07:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:19:02.543450149 +0000 UTC m=+155.139072675" watchObservedRunningTime="2026-01-27 07:19:02.547063418 +0000 UTC m=+155.142685944" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.869194 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.911767 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume\") pod \"b4c388c4-2071-4b3b-97b6-52aec664b967\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.911890 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bqlk\" (UniqueName: \"kubernetes.io/projected/b4c388c4-2071-4b3b-97b6-52aec664b967-kube-api-access-7bqlk\") pod \"b4c388c4-2071-4b3b-97b6-52aec664b967\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.911920 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume\") pod \"b4c388c4-2071-4b3b-97b6-52aec664b967\" (UID: \"b4c388c4-2071-4b3b-97b6-52aec664b967\") " Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.915796 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume" (OuterVolumeSpecName: "config-volume") pod "b4c388c4-2071-4b3b-97b6-52aec664b967" (UID: "b4c388c4-2071-4b3b-97b6-52aec664b967"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.923229 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c388c4-2071-4b3b-97b6-52aec664b967-kube-api-access-7bqlk" (OuterVolumeSpecName: "kube-api-access-7bqlk") pod "b4c388c4-2071-4b3b-97b6-52aec664b967" (UID: "b4c388c4-2071-4b3b-97b6-52aec664b967"). 
InnerVolumeSpecName "kube-api-access-7bqlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:19:02 crc kubenswrapper[4764]: I0127 07:19:02.925687 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b4c388c4-2071-4b3b-97b6-52aec664b967" (UID: "b4c388c4-2071-4b3b-97b6-52aec664b967"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.016158 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bqlk\" (UniqueName: \"kubernetes.io/projected/b4c388c4-2071-4b3b-97b6-52aec664b967-kube-api-access-7bqlk\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.016211 4764 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4c388c4-2071-4b3b-97b6-52aec664b967-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.016226 4764 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4c388c4-2071-4b3b-97b6-52aec664b967-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.184079 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:03 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:03 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:03 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.184188 4764 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.405638 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" event={"ID":"b4c388c4-2071-4b3b-97b6-52aec664b967","Type":"ContainerDied","Data":"3aa8a46b744498fb757c38151d946b9abaa625ce51e6f412476579abafe4759d"} Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.405706 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa8a46b744498fb757c38151d946b9abaa625ce51e6f412476579abafe4759d" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.405757 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.407589 4764 generic.go:334] "Generic (PLEG): container finished" podID="ae40783e-3486-421b-b86c-9229ba9799d8" containerID="a11da60d196b1633e2194bbd4ecd11b7aadf362ec84ac4ed4a0a47c41ab87601" exitCode=0 Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.407744 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ae40783e-3486-421b-b86c-9229ba9799d8","Type":"ContainerDied","Data":"a11da60d196b1633e2194bbd4ecd11b7aadf362ec84ac4ed4a0a47c41ab87601"} Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.701236 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.842542 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kubelet-dir\") pod \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\" (UID: \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.842799 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kube-api-access\") pod \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\" (UID: \"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016\") " Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.843428 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5f433ea-ea3f-4cea-a3f7-0a9b1456e016" (UID: "d5f433ea-ea3f-4cea-a3f7-0a9b1456e016"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.848768 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5f433ea-ea3f-4cea-a3f7-0a9b1456e016" (UID: "d5f433ea-ea3f-4cea-a3f7-0a9b1456e016"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.944501 4764 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:03 crc kubenswrapper[4764]: I0127 07:19:03.944543 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f433ea-ea3f-4cea-a3f7-0a9b1456e016-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.183154 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:04 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:04 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:04 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.183215 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.446214 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.457995 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5f433ea-ea3f-4cea-a3f7-0a9b1456e016","Type":"ContainerDied","Data":"f8196bfbcf17a9706d4ad520f622e0548183a3b3668a1fd2ad2fd43fc2f3e02a"} Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.458060 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8196bfbcf17a9706d4ad520f622e0548183a3b3668a1fd2ad2fd43fc2f3e02a" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.753505 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.857686 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae40783e-3486-421b-b86c-9229ba9799d8-kubelet-dir\") pod \"ae40783e-3486-421b-b86c-9229ba9799d8\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.857799 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae40783e-3486-421b-b86c-9229ba9799d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ae40783e-3486-421b-b86c-9229ba9799d8" (UID: "ae40783e-3486-421b-b86c-9229ba9799d8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.858363 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae40783e-3486-421b-b86c-9229ba9799d8-kube-api-access\") pod \"ae40783e-3486-421b-b86c-9229ba9799d8\" (UID: \"ae40783e-3486-421b-b86c-9229ba9799d8\") " Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.858680 4764 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae40783e-3486-421b-b86c-9229ba9799d8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.880734 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae40783e-3486-421b-b86c-9229ba9799d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ae40783e-3486-421b-b86c-9229ba9799d8" (UID: "ae40783e-3486-421b-b86c-9229ba9799d8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:19:04 crc kubenswrapper[4764]: I0127 07:19:04.960559 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae40783e-3486-421b-b86c-9229ba9799d8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:05 crc kubenswrapper[4764]: I0127 07:19:05.184326 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:05 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:05 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:05 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:05 crc kubenswrapper[4764]: I0127 07:19:05.184391 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:05 crc kubenswrapper[4764]: I0127 07:19:05.495385 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ae40783e-3486-421b-b86c-9229ba9799d8","Type":"ContainerDied","Data":"eaf5e2f5cc37ab357dd68dbba53d9b7f78ed69d91e7c1e8184bf3553236a25c7"} Jan 27 07:19:05 crc kubenswrapper[4764]: I0127 07:19:05.495467 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf5e2f5cc37ab357dd68dbba53d9b7f78ed69d91e7c1e8184bf3553236a25c7" Jan 27 07:19:05 crc kubenswrapper[4764]: I0127 07:19:05.495588 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:19:05 crc kubenswrapper[4764]: I0127 07:19:05.994916 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:19:06 crc kubenswrapper[4764]: I0127 07:19:06.000866 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-drm8b" Jan 27 07:19:06 crc kubenswrapper[4764]: I0127 07:19:06.188142 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:06 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:06 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:06 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:06 crc kubenswrapper[4764]: I0127 07:19:06.188235 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:07 crc kubenswrapper[4764]: I0127 07:19:07.136213 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7d5nf" Jan 27 07:19:07 crc kubenswrapper[4764]: I0127 07:19:07.183479 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:07 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:07 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:07 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:07 crc 
kubenswrapper[4764]: I0127 07:19:07.183582 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:07 crc kubenswrapper[4764]: I0127 07:19:07.765629 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:19:08 crc kubenswrapper[4764]: I0127 07:19:08.183041 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:08 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:08 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:08 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:08 crc kubenswrapper[4764]: I0127 07:19:08.183625 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:09 crc kubenswrapper[4764]: I0127 07:19:09.183592 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:09 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:09 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:09 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:09 crc kubenswrapper[4764]: I0127 07:19:09.184313 4764 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:10 crc kubenswrapper[4764]: I0127 07:19:10.182954 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:10 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:10 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:10 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:10 crc kubenswrapper[4764]: I0127 07:19:10.183069 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.182892 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:11 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:11 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:11 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.182983 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.269465 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/downloads-7954f5f757-989np" Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.609132 4764 patch_prober.go:28] interesting pod/console-f9d7485db-dk6gm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.609203 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dk6gm" podUID="39f8297e-b534-44ff-9b38-4eb269960b80" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.813639 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.836891 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a5473d6-3349-44a0-8a36-4112062a89a6-metrics-certs\") pod \"network-metrics-daemon-crfqf\" (UID: \"6a5473d6-3349-44a0-8a36-4112062a89a6\") " pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:19:11 crc kubenswrapper[4764]: I0127 07:19:11.894570 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-crfqf" Jan 27 07:19:12 crc kubenswrapper[4764]: I0127 07:19:12.184098 4764 patch_prober.go:28] interesting pod/router-default-5444994796-kpqkv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:19:12 crc kubenswrapper[4764]: [-]has-synced failed: reason withheld Jan 27 07:19:12 crc kubenswrapper[4764]: [+]process-running ok Jan 27 07:19:12 crc kubenswrapper[4764]: healthz check failed Jan 27 07:19:12 crc kubenswrapper[4764]: I0127 07:19:12.184190 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kpqkv" podUID="5977cde5-6561-49a6-923d-74f32d8d74a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:19:13 crc kubenswrapper[4764]: I0127 07:19:13.183303 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:19:13 crc kubenswrapper[4764]: I0127 07:19:13.186827 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kpqkv" Jan 27 07:19:15 crc kubenswrapper[4764]: I0127 07:19:15.174790 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-crfqf"] Jan 27 07:19:20 crc kubenswrapper[4764]: I0127 07:19:20.997020 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:19:21 crc kubenswrapper[4764]: I0127 07:19:21.612494 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:19:21 crc kubenswrapper[4764]: I0127 07:19:21.616753 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:19:23 crc kubenswrapper[4764]: I0127 07:19:23.762699 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:19:23 crc kubenswrapper[4764]: I0127 07:19:23.763201 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:19:30 crc kubenswrapper[4764]: E0127 07:19:30.134650 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 07:19:30 crc kubenswrapper[4764]: E0127 07:19:30.135663 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ljtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ddpln_openshift-marketplace(ba301678-dac1-45dd-a1fc-6db20a2f38aa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:19:30 crc kubenswrapper[4764]: E0127 07:19:30.136850 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ddpln" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" Jan 27 07:19:31 crc 
kubenswrapper[4764]: W0127 07:19:31.452637 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a5473d6_3349_44a0_8a36_4112062a89a6.slice/crio-d62d9983050a546db83f7060c33beffe45d7ee360d67f67c35ce8602d4aa2bcc WatchSource:0}: Error finding container d62d9983050a546db83f7060c33beffe45d7ee360d67f67c35ce8602d4aa2bcc: Status 404 returned error can't find the container with id d62d9983050a546db83f7060c33beffe45d7ee360d67f67c35ce8602d4aa2bcc Jan 27 07:19:31 crc kubenswrapper[4764]: E0127 07:19:31.518831 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 07:19:31 crc kubenswrapper[4764]: E0127 07:19:31.519043 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2zbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-94qqk_openshift-marketplace(23aafd6e-3e01-40a4-82bb-4315c225500b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:19:31 crc kubenswrapper[4764]: E0127 07:19:31.520259 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-94qqk" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" Jan 27 07:19:31 crc 
kubenswrapper[4764]: I0127 07:19:31.721947 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-crfqf" event={"ID":"6a5473d6-3349-44a0-8a36-4112062a89a6","Type":"ContainerStarted","Data":"d62d9983050a546db83f7060c33beffe45d7ee360d67f67c35ce8602d4aa2bcc"} Jan 27 07:19:31 crc kubenswrapper[4764]: I0127 07:19:31.800883 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n9bt8" Jan 27 07:19:32 crc kubenswrapper[4764]: E0127 07:19:32.945957 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ddpln" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" Jan 27 07:19:32 crc kubenswrapper[4764]: E0127 07:19:32.947277 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-94qqk" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.012356 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.012580 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj44t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mjcjk_openshift-marketplace(feb5a7ef-0513-4f04-8232-9490e959628d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.013764 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mjcjk" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" Jan 27 07:19:33 crc 
kubenswrapper[4764]: E0127 07:19:33.134602 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.134815 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvzdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-pblvj_openshift-marketplace(6ab6b260-c1b8-49ff-aa32-54abd16f0b66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.136211 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pblvj" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.153489 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.153781 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xp7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b2w9f_openshift-marketplace(682f26c7-1456-4103-b2be-9f3f92eb643d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.154888 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b2w9f" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" Jan 27 07:19:33 crc 
kubenswrapper[4764]: I0127 07:19:33.736900 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rm5p" event={"ID":"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa","Type":"ContainerStarted","Data":"89a440d809c86396b10a51f317f2b284ff5465575442af13375c7443b0c7eeaa"} Jan 27 07:19:33 crc kubenswrapper[4764]: I0127 07:19:33.739706 4764 generic.go:334] "Generic (PLEG): container finished" podID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerID="896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab" exitCode=0 Jan 27 07:19:33 crc kubenswrapper[4764]: I0127 07:19:33.739794 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5l9p" event={"ID":"84b43e28-67a7-4296-85aa-cc66fdfc2449","Type":"ContainerDied","Data":"896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab"} Jan 27 07:19:33 crc kubenswrapper[4764]: I0127 07:19:33.745220 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-crfqf" event={"ID":"6a5473d6-3349-44a0-8a36-4112062a89a6","Type":"ContainerStarted","Data":"08ed53e79de077b328d0c87fa5554a368ac4276b98da2b2b1e2b45461af155cc"} Jan 27 07:19:33 crc kubenswrapper[4764]: I0127 07:19:33.745267 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-crfqf" event={"ID":"6a5473d6-3349-44a0-8a36-4112062a89a6","Type":"ContainerStarted","Data":"28373ccedc06f7253c4c392a3eeb721467165c5b147798c58b2dbea609260359"} Jan 27 07:19:33 crc kubenswrapper[4764]: I0127 07:19:33.748306 4764 generic.go:334] "Generic (PLEG): container finished" podID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerID="acf50b31666bd7cd33957749b5554e0a19a17b418450b9698a1022a653449e3b" exitCode=0 Jan 27 07:19:33 crc kubenswrapper[4764]: I0127 07:19:33.749822 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlsvp" 
event={"ID":"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1","Type":"ContainerDied","Data":"acf50b31666bd7cd33957749b5554e0a19a17b418450b9698a1022a653449e3b"} Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.749966 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mjcjk" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.753958 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b2w9f" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" Jan 27 07:19:33 crc kubenswrapper[4764]: E0127 07:19:33.754019 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pblvj" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" Jan 27 07:19:33 crc kubenswrapper[4764]: I0127 07:19:33.810743 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-crfqf" podStartSLOduration=164.810716168 podStartE2EDuration="2m44.810716168s" podCreationTimestamp="2026-01-27 07:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:19:33.806529323 +0000 UTC m=+186.402151859" watchObservedRunningTime="2026-01-27 07:19:33.810716168 +0000 UTC m=+186.406338694" Jan 27 07:19:34 crc kubenswrapper[4764]: I0127 07:19:34.760156 4764 generic.go:334] "Generic 
(PLEG): container finished" podID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerID="89a440d809c86396b10a51f317f2b284ff5465575442af13375c7443b0c7eeaa" exitCode=0 Jan 27 07:19:34 crc kubenswrapper[4764]: I0127 07:19:34.760349 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rm5p" event={"ID":"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa","Type":"ContainerDied","Data":"89a440d809c86396b10a51f317f2b284ff5465575442af13375c7443b0c7eeaa"} Jan 27 07:19:35 crc kubenswrapper[4764]: I0127 07:19:35.772342 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rm5p" event={"ID":"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa","Type":"ContainerStarted","Data":"edd2c977c0f032a5d87d6b85dd34dd6305f612090d2dca6134d7c4a7db0c3aa5"} Jan 27 07:19:35 crc kubenswrapper[4764]: I0127 07:19:35.777265 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5l9p" event={"ID":"84b43e28-67a7-4296-85aa-cc66fdfc2449","Type":"ContainerStarted","Data":"fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9"} Jan 27 07:19:35 crc kubenswrapper[4764]: I0127 07:19:35.783998 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlsvp" event={"ID":"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1","Type":"ContainerStarted","Data":"3624d0b4a219c0a4f6af9623f18b4107eda114182ec006f2cdf0f197a28faaa3"} Jan 27 07:19:35 crc kubenswrapper[4764]: I0127 07:19:35.800567 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6rm5p" podStartSLOduration=2.859530837 podStartE2EDuration="35.800543645s" podCreationTimestamp="2026-01-27 07:19:00 +0000 UTC" firstStartedPulling="2026-01-27 07:19:02.376755974 +0000 UTC m=+154.972378500" lastFinishedPulling="2026-01-27 07:19:35.317768782 +0000 UTC m=+187.913391308" observedRunningTime="2026-01-27 07:19:35.792031681 +0000 UTC m=+188.387654207" 
watchObservedRunningTime="2026-01-27 07:19:35.800543645 +0000 UTC m=+188.396166191" Jan 27 07:19:36 crc kubenswrapper[4764]: I0127 07:19:36.818653 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5l9p" podStartSLOduration=5.764102331 podStartE2EDuration="39.818621896s" podCreationTimestamp="2026-01-27 07:18:57 +0000 UTC" firstStartedPulling="2026-01-27 07:19:01.25494348 +0000 UTC m=+153.850566006" lastFinishedPulling="2026-01-27 07:19:35.309463045 +0000 UTC m=+187.905085571" observedRunningTime="2026-01-27 07:19:35.820163292 +0000 UTC m=+188.415785828" watchObservedRunningTime="2026-01-27 07:19:36.818621896 +0000 UTC m=+189.414244462" Jan 27 07:19:37 crc kubenswrapper[4764]: I0127 07:19:37.771046 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:19:37 crc kubenswrapper[4764]: I0127 07:19:37.787893 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nlsvp" podStartSLOduration=6.502465884 podStartE2EDuration="40.787864092s" podCreationTimestamp="2026-01-27 07:18:57 +0000 UTC" firstStartedPulling="2026-01-27 07:19:01.161361367 +0000 UTC m=+153.756983893" lastFinishedPulling="2026-01-27 07:19:35.446759565 +0000 UTC m=+188.042382101" observedRunningTime="2026-01-27 07:19:36.820786326 +0000 UTC m=+189.416408852" watchObservedRunningTime="2026-01-27 07:19:37.787864092 +0000 UTC m=+190.383486618" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.017500 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.017567 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.369896 4764 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.370424 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.627352 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:19:38 crc kubenswrapper[4764]: E0127 07:19:38.627675 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae40783e-3486-421b-b86c-9229ba9799d8" containerName="pruner" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.627692 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae40783e-3486-421b-b86c-9229ba9799d8" containerName="pruner" Jan 27 07:19:38 crc kubenswrapper[4764]: E0127 07:19:38.627705 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c388c4-2071-4b3b-97b6-52aec664b967" containerName="collect-profiles" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.627718 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c388c4-2071-4b3b-97b6-52aec664b967" containerName="collect-profiles" Jan 27 07:19:38 crc kubenswrapper[4764]: E0127 07:19:38.627726 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f433ea-ea3f-4cea-a3f7-0a9b1456e016" containerName="pruner" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.627732 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f433ea-ea3f-4cea-a3f7-0a9b1456e016" containerName="pruner" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.627855 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae40783e-3486-421b-b86c-9229ba9799d8" containerName="pruner" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.627871 4764 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b4c388c4-2071-4b3b-97b6-52aec664b967" containerName="collect-profiles" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.627882 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f433ea-ea3f-4cea-a3f7-0a9b1456e016" containerName="pruner" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.630550 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.633306 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.634785 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.654599 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.737405 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c1a498-d241-4fc2-8908-12c71abba312-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.737565 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c1a498-d241-4fc2-8908-12c71abba312-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.838955 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/70c1a498-d241-4fc2-8908-12c71abba312-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.839274 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c1a498-d241-4fc2-8908-12c71abba312-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.839377 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c1a498-d241-4fc2-8908-12c71abba312-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.859685 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c1a498-d241-4fc2-8908-12c71abba312-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:38 crc kubenswrapper[4764]: I0127 07:19:38.953590 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:39 crc kubenswrapper[4764]: I0127 07:19:39.173751 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nlsvp" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="registry-server" probeResult="failure" output=< Jan 27 07:19:39 crc kubenswrapper[4764]: timeout: failed to connect service ":50051" within 1s Jan 27 07:19:39 crc kubenswrapper[4764]: > Jan 27 07:19:39 crc kubenswrapper[4764]: I0127 07:19:39.428024 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d5l9p" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="registry-server" probeResult="failure" output=< Jan 27 07:19:39 crc kubenswrapper[4764]: timeout: failed to connect service ":50051" within 1s Jan 27 07:19:39 crc kubenswrapper[4764]: > Jan 27 07:19:39 crc kubenswrapper[4764]: I0127 07:19:39.440652 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:19:39 crc kubenswrapper[4764]: W0127 07:19:39.459127 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod70c1a498_d241_4fc2_8908_12c71abba312.slice/crio-3afd3a569987b77e6b235ff7d9301cf92886ed180b212c3d656eb7e22cc6008d WatchSource:0}: Error finding container 3afd3a569987b77e6b235ff7d9301cf92886ed180b212c3d656eb7e22cc6008d: Status 404 returned error can't find the container with id 3afd3a569987b77e6b235ff7d9301cf92886ed180b212c3d656eb7e22cc6008d Jan 27 07:19:39 crc kubenswrapper[4764]: I0127 07:19:39.811167 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"70c1a498-d241-4fc2-8908-12c71abba312","Type":"ContainerStarted","Data":"3afd3a569987b77e6b235ff7d9301cf92886ed180b212c3d656eb7e22cc6008d"} Jan 27 07:19:40 crc kubenswrapper[4764]: I0127 07:19:40.819519 4764 generic.go:334] 
"Generic (PLEG): container finished" podID="70c1a498-d241-4fc2-8908-12c71abba312" containerID="5b105027f9466fb53631a9655b0d336b7301385e2558334cab15e76717367f00" exitCode=0 Jan 27 07:19:40 crc kubenswrapper[4764]: I0127 07:19:40.819635 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"70c1a498-d241-4fc2-8908-12c71abba312","Type":"ContainerDied","Data":"5b105027f9466fb53631a9655b0d336b7301385e2558334cab15e76717367f00"} Jan 27 07:19:41 crc kubenswrapper[4764]: I0127 07:19:41.367016 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:41 crc kubenswrapper[4764]: I0127 07:19:41.367092 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.223282 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.321933 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c1a498-d241-4fc2-8908-12c71abba312-kubelet-dir\") pod \"70c1a498-d241-4fc2-8908-12c71abba312\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.322116 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c1a498-d241-4fc2-8908-12c71abba312-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70c1a498-d241-4fc2-8908-12c71abba312" (UID: "70c1a498-d241-4fc2-8908-12c71abba312"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.322150 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c1a498-d241-4fc2-8908-12c71abba312-kube-api-access\") pod \"70c1a498-d241-4fc2-8908-12c71abba312\" (UID: \"70c1a498-d241-4fc2-8908-12c71abba312\") " Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.322820 4764 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c1a498-d241-4fc2-8908-12c71abba312-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.329565 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c1a498-d241-4fc2-8908-12c71abba312-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70c1a498-d241-4fc2-8908-12c71abba312" (UID: "70c1a498-d241-4fc2-8908-12c71abba312"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.424205 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c1a498-d241-4fc2-8908-12c71abba312-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.434675 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6rm5p" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="registry-server" probeResult="failure" output=< Jan 27 07:19:42 crc kubenswrapper[4764]: timeout: failed to connect service ":50051" within 1s Jan 27 07:19:42 crc kubenswrapper[4764]: > Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.835591 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"70c1a498-d241-4fc2-8908-12c71abba312","Type":"ContainerDied","Data":"3afd3a569987b77e6b235ff7d9301cf92886ed180b212c3d656eb7e22cc6008d"} Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.835994 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3afd3a569987b77e6b235ff7d9301cf92886ed180b212c3d656eb7e22cc6008d" Jan 27 07:19:42 crc kubenswrapper[4764]: I0127 07:19:42.836071 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.423883 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:19:45 crc kubenswrapper[4764]: E0127 07:19:45.424178 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c1a498-d241-4fc2-8908-12c71abba312" containerName="pruner" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.424191 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c1a498-d241-4fc2-8908-12c71abba312" containerName="pruner" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.424296 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c1a498-d241-4fc2-8908-12c71abba312" containerName="pruner" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.424760 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.427991 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.428378 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.433050 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.464410 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-var-lock\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.464532 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.464787 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.565976 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.566408 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.566548 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-var-lock\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.566594 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-var-lock\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.566579 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.589528 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:45 crc kubenswrapper[4764]: I0127 07:19:45.745121 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:19:46 crc kubenswrapper[4764]: I0127 07:19:46.158315 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:19:46 crc kubenswrapper[4764]: W0127 07:19:46.166200 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf6c0f57a_a21d_44db_b8f8_88a40a1cdce8.slice/crio-c88ae431aaea1ec1212fba909045e659c98a9f65d901455da63b34c2a153e64b WatchSource:0}: Error finding container c88ae431aaea1ec1212fba909045e659c98a9f65d901455da63b34c2a153e64b: Status 404 returned error can't find the container with id c88ae431aaea1ec1212fba909045e659c98a9f65d901455da63b34c2a153e64b Jan 27 07:19:46 crc kubenswrapper[4764]: I0127 07:19:46.875504 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8","Type":"ContainerStarted","Data":"2a9dabf1dc7bdae14727cb7916e1279e56b8a36aca3c4d0cea0e2fd04e5f933d"} Jan 27 07:19:46 crc kubenswrapper[4764]: I0127 07:19:46.875573 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8","Type":"ContainerStarted","Data":"c88ae431aaea1ec1212fba909045e659c98a9f65d901455da63b34c2a153e64b"} Jan 27 07:19:47 crc kubenswrapper[4764]: I0127 07:19:47.461847 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.461821417 podStartE2EDuration="2.461821417s" podCreationTimestamp="2026-01-27 07:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:19:46.90331389 +0000 UTC m=+199.498936416" watchObservedRunningTime="2026-01-27 07:19:47.461821417 +0000 UTC m=+200.057443943" Jan 27 07:19:47 crc kubenswrapper[4764]: I0127 
07:19:47.884037 4764 generic.go:334] "Generic (PLEG): container finished" podID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerID="ec2a85f7043799abcc4a27a511949533bd5e6e938672d50cea1bb8ccaca06c42" exitCode=0 Jan 27 07:19:47 crc kubenswrapper[4764]: I0127 07:19:47.884135 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94qqk" event={"ID":"23aafd6e-3e01-40a4-82bb-4315c225500b","Type":"ContainerDied","Data":"ec2a85f7043799abcc4a27a511949533bd5e6e938672d50cea1bb8ccaca06c42"} Jan 27 07:19:47 crc kubenswrapper[4764]: I0127 07:19:47.887157 4764 generic.go:334] "Generic (PLEG): container finished" podID="feb5a7ef-0513-4f04-8232-9490e959628d" containerID="095c7b7e6f43f2c3b16bde80679e58c478d2b5ab164dd4ec31ee1b86adb5055f" exitCode=0 Jan 27 07:19:47 crc kubenswrapper[4764]: I0127 07:19:47.887287 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcjk" event={"ID":"feb5a7ef-0513-4f04-8232-9490e959628d","Type":"ContainerDied","Data":"095c7b7e6f43f2c3b16bde80679e58c478d2b5ab164dd4ec31ee1b86adb5055f"} Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.074549 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.131213 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.427391 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.469375 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.894891 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-94qqk" event={"ID":"23aafd6e-3e01-40a4-82bb-4315c225500b","Type":"ContainerStarted","Data":"d89860d13ab662dcc8d7989c236d905b597c28bd0d41c6a49b2debb659552f1f"} Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.898025 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcjk" event={"ID":"feb5a7ef-0513-4f04-8232-9490e959628d","Type":"ContainerStarted","Data":"8b90b1d93745177516bc8d6de9ca75b218776061e967a4eade74602213c892b0"} Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.900633 4764 generic.go:334] "Generic (PLEG): container finished" podID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerID="4f8f1b1ad56f4cc92a88dbbbc888dd2905d74a09aac02561bb10e7e6403bd902" exitCode=0 Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.900692 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2w9f" event={"ID":"682f26c7-1456-4103-b2be-9f3f92eb643d","Type":"ContainerDied","Data":"4f8f1b1ad56f4cc92a88dbbbc888dd2905d74a09aac02561bb10e7e6403bd902"} Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.903369 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpln" event={"ID":"ba301678-dac1-45dd-a1fc-6db20a2f38aa","Type":"ContainerStarted","Data":"5665df7008dff6624aa8c5c8e4c97f3106b7040f49b590d5fa06381287a1d603"} Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.922703 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-94qqk" podStartSLOduration=4.902820333 podStartE2EDuration="51.922671466s" podCreationTimestamp="2026-01-27 07:18:57 +0000 UTC" firstStartedPulling="2026-01-27 07:19:01.288769786 +0000 UTC m=+153.884392302" lastFinishedPulling="2026-01-27 07:19:48.308620909 +0000 UTC m=+200.904243435" observedRunningTime="2026-01-27 07:19:48.922432709 +0000 UTC m=+201.518055235" 
watchObservedRunningTime="2026-01-27 07:19:48.922671466 +0000 UTC m=+201.518293992" Jan 27 07:19:48 crc kubenswrapper[4764]: I0127 07:19:48.983975 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mjcjk" podStartSLOduration=2.6173538450000002 podStartE2EDuration="49.983953534s" podCreationTimestamp="2026-01-27 07:18:59 +0000 UTC" firstStartedPulling="2026-01-27 07:19:01.175901195 +0000 UTC m=+153.771523721" lastFinishedPulling="2026-01-27 07:19:48.542500884 +0000 UTC m=+201.138123410" observedRunningTime="2026-01-27 07:19:48.951581358 +0000 UTC m=+201.547203884" watchObservedRunningTime="2026-01-27 07:19:48.983953534 +0000 UTC m=+201.579576060" Jan 27 07:19:49 crc kubenswrapper[4764]: I0127 07:19:49.752553 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:19:49 crc kubenswrapper[4764]: I0127 07:19:49.752617 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:19:49 crc kubenswrapper[4764]: I0127 07:19:49.912262 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2w9f" event={"ID":"682f26c7-1456-4103-b2be-9f3f92eb643d","Type":"ContainerStarted","Data":"8d68bb22e60a8482916717f779cdddf9857ec60a63bf8b83437117a399b1919b"} Jan 27 07:19:49 crc kubenswrapper[4764]: I0127 07:19:49.914371 4764 generic.go:334] "Generic (PLEG): container finished" podID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerID="5665df7008dff6624aa8c5c8e4c97f3106b7040f49b590d5fa06381287a1d603" exitCode=0 Jan 27 07:19:49 crc kubenswrapper[4764]: I0127 07:19:49.914463 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpln" event={"ID":"ba301678-dac1-45dd-a1fc-6db20a2f38aa","Type":"ContainerDied","Data":"5665df7008dff6624aa8c5c8e4c97f3106b7040f49b590d5fa06381287a1d603"} Jan 
27 07:19:49 crc kubenswrapper[4764]: I0127 07:19:49.937697 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b2w9f" podStartSLOduration=2.919313925 podStartE2EDuration="50.937647234s" podCreationTimestamp="2026-01-27 07:18:59 +0000 UTC" firstStartedPulling="2026-01-27 07:19:01.277739464 +0000 UTC m=+153.873361990" lastFinishedPulling="2026-01-27 07:19:49.296072773 +0000 UTC m=+201.891695299" observedRunningTime="2026-01-27 07:19:49.934339573 +0000 UTC m=+202.529962099" watchObservedRunningTime="2026-01-27 07:19:49.937647234 +0000 UTC m=+202.533269760" Jan 27 07:19:50 crc kubenswrapper[4764]: I0127 07:19:50.207375 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:50 crc kubenswrapper[4764]: I0127 07:19:50.207650 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:19:50 crc kubenswrapper[4764]: I0127 07:19:50.795879 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mjcjk" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="registry-server" probeResult="failure" output=< Jan 27 07:19:50 crc kubenswrapper[4764]: timeout: failed to connect service ":50051" within 1s Jan 27 07:19:50 crc kubenswrapper[4764]: > Jan 27 07:19:50 crc kubenswrapper[4764]: I0127 07:19:50.922722 4764 generic.go:334] "Generic (PLEG): container finished" podID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerID="15f302c91a9569a605bb47ebc6a917497ea0f6c4827ab32517783b0cb003a7d3" exitCode=0 Jan 27 07:19:50 crc kubenswrapper[4764]: I0127 07:19:50.922827 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pblvj" event={"ID":"6ab6b260-c1b8-49ff-aa32-54abd16f0b66","Type":"ContainerDied","Data":"15f302c91a9569a605bb47ebc6a917497ea0f6c4827ab32517783b0cb003a7d3"} 
Jan 27 07:19:50 crc kubenswrapper[4764]: I0127 07:19:50.925684 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpln" event={"ID":"ba301678-dac1-45dd-a1fc-6db20a2f38aa","Type":"ContainerStarted","Data":"4c496ca6c34dd6e8af4a9efb1dca002ad2f883be05211461029dd6275ab86a51"} Jan 27 07:19:50 crc kubenswrapper[4764]: I0127 07:19:50.973815 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddpln" podStartSLOduration=3.06669214 podStartE2EDuration="50.97378835s" podCreationTimestamp="2026-01-27 07:19:00 +0000 UTC" firstStartedPulling="2026-01-27 07:19:02.376457056 +0000 UTC m=+154.972079582" lastFinishedPulling="2026-01-27 07:19:50.283553266 +0000 UTC m=+202.879175792" observedRunningTime="2026-01-27 07:19:50.972129615 +0000 UTC m=+203.567752131" watchObservedRunningTime="2026-01-27 07:19:50.97378835 +0000 UTC m=+203.569410866" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.249543 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-b2w9f" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="registry-server" probeResult="failure" output=< Jan 27 07:19:51 crc kubenswrapper[4764]: timeout: failed to connect service ":50051" within 1s Jan 27 07:19:51 crc kubenswrapper[4764]: > Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.269526 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5l9p"] Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.269986 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5l9p" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="registry-server" containerID="cri-o://fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9" gracePeriod=2 Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.419178 4764 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.484163 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.713630 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.781070 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-catalog-content\") pod \"84b43e28-67a7-4296-85aa-cc66fdfc2449\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.781137 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbm5x\" (UniqueName: \"kubernetes.io/projected/84b43e28-67a7-4296-85aa-cc66fdfc2449-kube-api-access-hbm5x\") pod \"84b43e28-67a7-4296-85aa-cc66fdfc2449\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.781313 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-utilities\") pod \"84b43e28-67a7-4296-85aa-cc66fdfc2449\" (UID: \"84b43e28-67a7-4296-85aa-cc66fdfc2449\") " Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.782353 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-utilities" (OuterVolumeSpecName: "utilities") pod "84b43e28-67a7-4296-85aa-cc66fdfc2449" (UID: "84b43e28-67a7-4296-85aa-cc66fdfc2449"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.788503 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b43e28-67a7-4296-85aa-cc66fdfc2449-kube-api-access-hbm5x" (OuterVolumeSpecName: "kube-api-access-hbm5x") pod "84b43e28-67a7-4296-85aa-cc66fdfc2449" (UID: "84b43e28-67a7-4296-85aa-cc66fdfc2449"). InnerVolumeSpecName "kube-api-access-hbm5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.843219 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84b43e28-67a7-4296-85aa-cc66fdfc2449" (UID: "84b43e28-67a7-4296-85aa-cc66fdfc2449"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.883291 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.883335 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84b43e28-67a7-4296-85aa-cc66fdfc2449-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.883375 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbm5x\" (UniqueName: \"kubernetes.io/projected/84b43e28-67a7-4296-85aa-cc66fdfc2449-kube-api-access-hbm5x\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.932263 4764 generic.go:334] "Generic (PLEG): container finished" podID="84b43e28-67a7-4296-85aa-cc66fdfc2449" 
containerID="fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9" exitCode=0 Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.932305 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5l9p" event={"ID":"84b43e28-67a7-4296-85aa-cc66fdfc2449","Type":"ContainerDied","Data":"fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9"} Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.932348 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5l9p" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.932363 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5l9p" event={"ID":"84b43e28-67a7-4296-85aa-cc66fdfc2449","Type":"ContainerDied","Data":"74fcb60d9214fc47f1e8e8eb9730fc30d2c18bfd2a32de538500cb27276eab3b"} Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.932388 4764 scope.go:117] "RemoveContainer" containerID="fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.937937 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pblvj" event={"ID":"6ab6b260-c1b8-49ff-aa32-54abd16f0b66","Type":"ContainerStarted","Data":"bf165a0fb2e5cfb0502cae0f6a6cc971a95cfd61357737e10ef721c441436e5f"} Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.959721 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pblvj" podStartSLOduration=4.778525059 podStartE2EDuration="54.959701502s" podCreationTimestamp="2026-01-27 07:18:57 +0000 UTC" firstStartedPulling="2026-01-27 07:19:01.142653164 +0000 UTC m=+153.738275690" lastFinishedPulling="2026-01-27 07:19:51.323829607 +0000 UTC m=+203.919452133" observedRunningTime="2026-01-27 07:19:51.958089748 +0000 UTC m=+204.553712294" 
watchObservedRunningTime="2026-01-27 07:19:51.959701502 +0000 UTC m=+204.555324028" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.960507 4764 scope.go:117] "RemoveContainer" containerID="896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab" Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.979757 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5l9p"] Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.984170 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5l9p"] Jan 27 07:19:51 crc kubenswrapper[4764]: I0127 07:19:51.998289 4764 scope.go:117] "RemoveContainer" containerID="c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c" Jan 27 07:19:52 crc kubenswrapper[4764]: I0127 07:19:52.016152 4764 scope.go:117] "RemoveContainer" containerID="fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9" Jan 27 07:19:52 crc kubenswrapper[4764]: E0127 07:19:52.016777 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9\": container with ID starting with fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9 not found: ID does not exist" containerID="fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9" Jan 27 07:19:52 crc kubenswrapper[4764]: I0127 07:19:52.016831 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9"} err="failed to get container status \"fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9\": rpc error: code = NotFound desc = could not find container \"fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9\": container with ID starting with fc26bb7da132e49d95a6a26a88546e5ae8a8c4e46c62ebbd2f190fac26ba8dc9 not 
found: ID does not exist" Jan 27 07:19:52 crc kubenswrapper[4764]: I0127 07:19:52.016905 4764 scope.go:117] "RemoveContainer" containerID="896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab" Jan 27 07:19:52 crc kubenswrapper[4764]: E0127 07:19:52.022257 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab\": container with ID starting with 896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab not found: ID does not exist" containerID="896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab" Jan 27 07:19:52 crc kubenswrapper[4764]: I0127 07:19:52.022327 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab"} err="failed to get container status \"896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab\": rpc error: code = NotFound desc = could not find container \"896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab\": container with ID starting with 896159fadcb043f2a7f2eac907e29a2b4562bc2dacc58ccb976e55aa20815eab not found: ID does not exist" Jan 27 07:19:52 crc kubenswrapper[4764]: I0127 07:19:52.022374 4764 scope.go:117] "RemoveContainer" containerID="c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c" Jan 27 07:19:52 crc kubenswrapper[4764]: E0127 07:19:52.022858 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c\": container with ID starting with c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c not found: ID does not exist" containerID="c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c" Jan 27 07:19:52 crc kubenswrapper[4764]: I0127 07:19:52.022896 4764 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c"} err="failed to get container status \"c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c\": rpc error: code = NotFound desc = could not find container \"c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c\": container with ID starting with c3158ad57ab5405b3d8ab93bdc222db2c11a58c167fd47741110aa59e615d74c not found: ID does not exist" Jan 27 07:19:52 crc kubenswrapper[4764]: I0127 07:19:52.447180 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" path="/var/lib/kubelet/pods/84b43e28-67a7-4296-85aa-cc66fdfc2449/volumes" Jan 27 07:19:53 crc kubenswrapper[4764]: I0127 07:19:53.670347 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rm5p"] Jan 27 07:19:53 crc kubenswrapper[4764]: I0127 07:19:53.671679 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6rm5p" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="registry-server" containerID="cri-o://edd2c977c0f032a5d87d6b85dd34dd6305f612090d2dca6134d7c4a7db0c3aa5" gracePeriod=2 Jan 27 07:19:53 crc kubenswrapper[4764]: I0127 07:19:53.761930 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:19:53 crc kubenswrapper[4764]: I0127 07:19:53.762527 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:19:53 crc kubenswrapper[4764]: I0127 07:19:53.762591 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:19:53 crc kubenswrapper[4764]: I0127 07:19:53.763503 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:19:53 crc kubenswrapper[4764]: I0127 07:19:53.763573 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13" gracePeriod=600 Jan 27 07:19:54 crc kubenswrapper[4764]: I0127 07:19:54.959176 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13" exitCode=0 Jan 27 07:19:54 crc kubenswrapper[4764]: I0127 07:19:54.959262 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13"} Jan 27 07:19:54 crc kubenswrapper[4764]: I0127 07:19:54.961519 4764 generic.go:334] "Generic (PLEG): container finished" podID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerID="edd2c977c0f032a5d87d6b85dd34dd6305f612090d2dca6134d7c4a7db0c3aa5" exitCode=0 Jan 27 07:19:54 crc 
kubenswrapper[4764]: I0127 07:19:54.961550 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rm5p" event={"ID":"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa","Type":"ContainerDied","Data":"edd2c977c0f032a5d87d6b85dd34dd6305f612090d2dca6134d7c4a7db0c3aa5"} Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.367083 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.435225 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-utilities\") pod \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.435378 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-catalog-content\") pod \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.435572 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b957w\" (UniqueName: \"kubernetes.io/projected/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-kube-api-access-b957w\") pod \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\" (UID: \"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa\") " Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.436352 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-utilities" (OuterVolumeSpecName: "utilities") pod "03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" (UID: "03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.442337 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-kube-api-access-b957w" (OuterVolumeSpecName: "kube-api-access-b957w") pod "03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" (UID: "03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa"). InnerVolumeSpecName "kube-api-access-b957w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.538597 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.538649 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b957w\" (UniqueName: \"kubernetes.io/projected/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-kube-api-access-b957w\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.558768 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" (UID: "03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.640155 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.970552 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rm5p" event={"ID":"03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa","Type":"ContainerDied","Data":"65affe3d76602d14e3ac1064fef257173728149948badcdf2026f012f9755316"} Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.971086 4764 scope.go:117] "RemoveContainer" containerID="edd2c977c0f032a5d87d6b85dd34dd6305f612090d2dca6134d7c4a7db0c3aa5" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.970615 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rm5p" Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.973176 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"1c2d8be3e5ce23d3b5629115703f8692a01449add9bb3fca83dcedde2638f163"} Jan 27 07:19:55 crc kubenswrapper[4764]: I0127 07:19:55.990953 4764 scope.go:117] "RemoveContainer" containerID="89a440d809c86396b10a51f317f2b284ff5465575442af13375c7443b0c7eeaa" Jan 27 07:19:56 crc kubenswrapper[4764]: I0127 07:19:56.021469 4764 scope.go:117] "RemoveContainer" containerID="02e3489cf3931aed62701d5ec66a8e13573e1a2e7033a7c6d6b42419fa8d43e6" Jan 27 07:19:56 crc kubenswrapper[4764]: I0127 07:19:56.021705 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rm5p"] Jan 27 07:19:56 crc kubenswrapper[4764]: I0127 07:19:56.035931 4764 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-operators-6rm5p"] Jan 27 07:19:56 crc kubenswrapper[4764]: I0127 07:19:56.450076 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" path="/var/lib/kubelet/pods/03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa/volumes" Jan 27 07:19:58 crc kubenswrapper[4764]: I0127 07:19:58.016897 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:19:58 crc kubenswrapper[4764]: I0127 07:19:58.016989 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:19:58 crc kubenswrapper[4764]: I0127 07:19:58.067626 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:19:58 crc kubenswrapper[4764]: I0127 07:19:58.302027 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:19:58 crc kubenswrapper[4764]: I0127 07:19:58.302456 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:19:58 crc kubenswrapper[4764]: I0127 07:19:58.345215 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:19:59 crc kubenswrapper[4764]: I0127 07:19:59.045562 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:19:59 crc kubenswrapper[4764]: I0127 07:19:59.054091 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:19:59 crc kubenswrapper[4764]: I0127 07:19:59.792149 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:19:59 crc kubenswrapper[4764]: I0127 07:19:59.842951 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:20:00 crc kubenswrapper[4764]: I0127 07:20:00.251384 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:20:00 crc kubenswrapper[4764]: I0127 07:20:00.275714 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-94qqk"] Jan 27 07:20:00 crc kubenswrapper[4764]: I0127 07:20:00.296076 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:20:00 crc kubenswrapper[4764]: I0127 07:20:00.900848 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:20:00 crc kubenswrapper[4764]: I0127 07:20:00.901783 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:20:00 crc kubenswrapper[4764]: I0127 07:20:00.945077 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:20:01 crc kubenswrapper[4764]: I0127 07:20:01.013946 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-94qqk" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="registry-server" containerID="cri-o://d89860d13ab662dcc8d7989c236d905b597c28bd0d41c6a49b2debb659552f1f" gracePeriod=2 Jan 27 07:20:01 crc kubenswrapper[4764]: I0127 07:20:01.065163 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddpln" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.022092 4764 generic.go:334] 
"Generic (PLEG): container finished" podID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerID="d89860d13ab662dcc8d7989c236d905b597c28bd0d41c6a49b2debb659552f1f" exitCode=0 Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.022281 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94qqk" event={"ID":"23aafd6e-3e01-40a4-82bb-4315c225500b","Type":"ContainerDied","Data":"d89860d13ab662dcc8d7989c236d905b597c28bd0d41c6a49b2debb659552f1f"} Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.022935 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94qqk" event={"ID":"23aafd6e-3e01-40a4-82bb-4315c225500b","Type":"ContainerDied","Data":"392f57df3c459c289292c5a2d80650adc5c1311b619637e5bb1d012c396c09f2"} Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.022968 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="392f57df3c459c289292c5a2d80650adc5c1311b619637e5bb1d012c396c09f2" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.043525 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.143265 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-catalog-content\") pod \"23aafd6e-3e01-40a4-82bb-4315c225500b\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.143505 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2zbq\" (UniqueName: \"kubernetes.io/projected/23aafd6e-3e01-40a4-82bb-4315c225500b-kube-api-access-z2zbq\") pod \"23aafd6e-3e01-40a4-82bb-4315c225500b\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.143569 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-utilities\") pod \"23aafd6e-3e01-40a4-82bb-4315c225500b\" (UID: \"23aafd6e-3e01-40a4-82bb-4315c225500b\") " Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.144340 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-utilities" (OuterVolumeSpecName: "utilities") pod "23aafd6e-3e01-40a4-82bb-4315c225500b" (UID: "23aafd6e-3e01-40a4-82bb-4315c225500b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.144641 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.150348 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23aafd6e-3e01-40a4-82bb-4315c225500b-kube-api-access-z2zbq" (OuterVolumeSpecName: "kube-api-access-z2zbq") pod "23aafd6e-3e01-40a4-82bb-4315c225500b" (UID: "23aafd6e-3e01-40a4-82bb-4315c225500b"). InnerVolumeSpecName "kube-api-access-z2zbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.192880 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23aafd6e-3e01-40a4-82bb-4315c225500b" (UID: "23aafd6e-3e01-40a4-82bb-4315c225500b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.246156 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2zbq\" (UniqueName: \"kubernetes.io/projected/23aafd6e-3e01-40a4-82bb-4315c225500b-kube-api-access-z2zbq\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.246201 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aafd6e-3e01-40a4-82bb-4315c225500b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.671040 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2w9f"] Jan 27 07:20:02 crc kubenswrapper[4764]: I0127 07:20:02.671356 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b2w9f" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="registry-server" containerID="cri-o://8d68bb22e60a8482916717f779cdddf9857ec60a63bf8b83437117a399b1919b" gracePeriod=2 Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.029959 4764 generic.go:334] "Generic (PLEG): container finished" podID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerID="8d68bb22e60a8482916717f779cdddf9857ec60a63bf8b83437117a399b1919b" exitCode=0 Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.030414 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94qqk" Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.030192 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2w9f" event={"ID":"682f26c7-1456-4103-b2be-9f3f92eb643d","Type":"ContainerDied","Data":"8d68bb22e60a8482916717f779cdddf9857ec60a63bf8b83437117a399b1919b"} Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.053588 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-94qqk"] Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.057773 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-94qqk"] Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.107062 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.159536 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-utilities\") pod \"682f26c7-1456-4103-b2be-9f3f92eb643d\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.159708 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xp7v\" (UniqueName: \"kubernetes.io/projected/682f26c7-1456-4103-b2be-9f3f92eb643d-kube-api-access-8xp7v\") pod \"682f26c7-1456-4103-b2be-9f3f92eb643d\" (UID: \"682f26c7-1456-4103-b2be-9f3f92eb643d\") " Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.159764 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-catalog-content\") pod \"682f26c7-1456-4103-b2be-9f3f92eb643d\" (UID: 
\"682f26c7-1456-4103-b2be-9f3f92eb643d\") " Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.160659 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-utilities" (OuterVolumeSpecName: "utilities") pod "682f26c7-1456-4103-b2be-9f3f92eb643d" (UID: "682f26c7-1456-4103-b2be-9f3f92eb643d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.166109 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682f26c7-1456-4103-b2be-9f3f92eb643d-kube-api-access-8xp7v" (OuterVolumeSpecName: "kube-api-access-8xp7v") pod "682f26c7-1456-4103-b2be-9f3f92eb643d" (UID: "682f26c7-1456-4103-b2be-9f3f92eb643d"). InnerVolumeSpecName "kube-api-access-8xp7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.187796 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "682f26c7-1456-4103-b2be-9f3f92eb643d" (UID: "682f26c7-1456-4103-b2be-9f3f92eb643d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.261876 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.261928 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f26c7-1456-4103-b2be-9f3f92eb643d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:03 crc kubenswrapper[4764]: I0127 07:20:03.261944 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xp7v\" (UniqueName: \"kubernetes.io/projected/682f26c7-1456-4103-b2be-9f3f92eb643d-kube-api-access-8xp7v\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.038033 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2w9f" event={"ID":"682f26c7-1456-4103-b2be-9f3f92eb643d","Type":"ContainerDied","Data":"3374f7b0306b3417b1cb5eba6bbbb98a4848ced8f4f6fb70a226fe64ec74c64f"} Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.038585 4764 scope.go:117] "RemoveContainer" containerID="8d68bb22e60a8482916717f779cdddf9857ec60a63bf8b83437117a399b1919b" Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.038307 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2w9f" Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.062496 4764 scope.go:117] "RemoveContainer" containerID="4f8f1b1ad56f4cc92a88dbbbc888dd2905d74a09aac02561bb10e7e6403bd902" Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.074003 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2w9f"] Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.077047 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2w9f"] Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.092389 4764 scope.go:117] "RemoveContainer" containerID="17124029e0f7886f9068c5dbb2a517225cdbac13d0b0b3a32e1d6c12bffbd563" Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.446261 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" path="/var/lib/kubelet/pods/23aafd6e-3e01-40a4-82bb-4315c225500b/volumes" Jan 27 07:20:04 crc kubenswrapper[4764]: I0127 07:20:04.447576 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" path="/var/lib/kubelet/pods/682f26c7-1456-4103-b2be-9f3f92eb643d/volumes" Jan 27 07:20:10 crc kubenswrapper[4764]: I0127 07:20:10.629843 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2jhk"] Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.210196 4764 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211738 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211758 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211770 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211776 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211788 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211797 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211807 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211814 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211822 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211830 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211839 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211844 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211860 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211866 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211874 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211880 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211888 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211893 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211902 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211908 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="extract-utilities" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211918 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211924 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="extract-content" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.211934 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.211939 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.212062 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b43e28-67a7-4296-85aa-cc66fdfc2449" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.212075 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="23aafd6e-3e01-40a4-82bb-4315c225500b" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.212085 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="682f26c7-1456-4103-b2be-9f3f92eb643d" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.212094 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="03abda1b-8a84-4af2-a6f6-bdc2e4e44dfa" containerName="registry-server" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.213240 4764 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.213430 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.213741 4764 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214121 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252" gracePeriod=15 Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214227 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5" gracePeriod=15 Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214342 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46" gracePeriod=15 Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214396 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f" gracePeriod=15 Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214563 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14" gracePeriod=15 Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.214777 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214807 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.214831 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214843 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.214856 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214869 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.214900 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214911 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.214930 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="setup" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214942 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.214970 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.214982 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.214996 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215009 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:20:24 crc kubenswrapper[4764]: E0127 07:20:24.215023 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215036 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215298 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215332 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215395 4764 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215421 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215475 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215497 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.215512 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303077 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303149 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303205 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303255 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303298 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303351 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303379 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.303420 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.404238 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.404284 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.404408 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.404420 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.404473 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.404317 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405058 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405095 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405146 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405170 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405185 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405168 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405211 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405216 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405215 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:24 crc kubenswrapper[4764]: I0127 07:20:24.405241 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.176138 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.177934 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.179358 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5" exitCode=0 Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.179400 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46" exitCode=0 Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.179410 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14" exitCode=0 Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.179420 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f" exitCode=2 Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.179493 
4764 scope.go:117] "RemoveContainer" containerID="5bccf7b595e7593d08cad9333ee231e7b52db2186fe842f7ffff73f6aa97bb9e" Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.182467 4764 generic.go:334] "Generic (PLEG): container finished" podID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" containerID="2a9dabf1dc7bdae14727cb7916e1279e56b8a36aca3c4d0cea0e2fd04e5f933d" exitCode=0 Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.182506 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8","Type":"ContainerDied","Data":"2a9dabf1dc7bdae14727cb7916e1279e56b8a36aca3c4d0cea0e2fd04e5f933d"} Jan 27 07:20:25 crc kubenswrapper[4764]: I0127 07:20:25.183766 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.194644 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.547117 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.548193 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.641097 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kube-api-access\") pod \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.641230 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-var-lock\") pod \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.641347 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-var-lock" (OuterVolumeSpecName: "var-lock") pod "f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" (UID: "f6c0f57a-a21d-44db-b8f8-88a40a1cdce8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.641397 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kubelet-dir\") pod \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\" (UID: \"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8\") " Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.641467 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" (UID: "f6c0f57a-a21d-44db-b8f8-88a40a1cdce8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.641833 4764 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.641875 4764 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.646582 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" (UID: "f6c0f57a-a21d-44db-b8f8-88a40a1cdce8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.682587 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.683375 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.684128 4764 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.684411 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.742991 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6c0f57a-a21d-44db-b8f8-88a40a1cdce8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.843618 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.843772 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.843819 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.843883 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.844003 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.844066 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.844408 4764 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.844489 4764 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:26 crc kubenswrapper[4764]: I0127 07:20:26.844516 4764 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.205057 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.205678 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6c0f57a-a21d-44db-b8f8-88a40a1cdce8","Type":"ContainerDied","Data":"c88ae431aaea1ec1212fba909045e659c98a9f65d901455da63b34c2a153e64b"} Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.205820 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88ae431aaea1ec1212fba909045e659c98a9f65d901455da63b34c2a153e64b" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.223925 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.225566 4764 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252" exitCode=0 Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.225722 4764 scope.go:117] "RemoveContainer" containerID="fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.226248 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.235114 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.235960 4764 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.250880 4764 scope.go:117] "RemoveContainer" containerID="b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.262631 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.263085 4764 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.281047 4764 scope.go:117] "RemoveContainer" containerID="0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.300710 4764 scope.go:117] "RemoveContainer" containerID="73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.320258 4764 scope.go:117] "RemoveContainer" containerID="549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.341316 4764 scope.go:117] "RemoveContainer" containerID="ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.363534 4764 scope.go:117] "RemoveContainer" containerID="fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5" Jan 27 07:20:27 crc kubenswrapper[4764]: E0127 07:20:27.364125 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\": container with ID starting with fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5 not found: ID does not exist" containerID="fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.364203 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5"} err="failed to get container status \"fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\": rpc error: code = NotFound desc = could not find container 
\"fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5\": container with ID starting with fd287a03705ab2c97c0ead82d3cbaff0648f2f22f3b4608f9c7977a5200cbed5 not found: ID does not exist" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.364237 4764 scope.go:117] "RemoveContainer" containerID="b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46" Jan 27 07:20:27 crc kubenswrapper[4764]: E0127 07:20:27.365117 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\": container with ID starting with b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46 not found: ID does not exist" containerID="b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.365238 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46"} err="failed to get container status \"b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\": rpc error: code = NotFound desc = could not find container \"b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46\": container with ID starting with b2079fb47e5b424834c3f9db247f48efb0b1de4ccc5d59e83caeeedfb113bc46 not found: ID does not exist" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.365574 4764 scope.go:117] "RemoveContainer" containerID="0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14" Jan 27 07:20:27 crc kubenswrapper[4764]: E0127 07:20:27.366029 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\": container with ID starting with 0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14 not found: ID does not exist" 
containerID="0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.366054 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14"} err="failed to get container status \"0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\": rpc error: code = NotFound desc = could not find container \"0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14\": container with ID starting with 0c1d40aeb21e82fee121465f92666bda28e0a684a2e35694665a1f948939ab14 not found: ID does not exist" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.366070 4764 scope.go:117] "RemoveContainer" containerID="73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f" Jan 27 07:20:27 crc kubenswrapper[4764]: E0127 07:20:27.366925 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\": container with ID starting with 73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f not found: ID does not exist" containerID="73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.366953 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f"} err="failed to get container status \"73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\": rpc error: code = NotFound desc = could not find container \"73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f\": container with ID starting with 73e1df77f50074ac163b4df0bf685f04ac373b71afb10ebff5f7efc66752e34f not found: ID does not exist" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.366969 4764 scope.go:117] 
"RemoveContainer" containerID="549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252" Jan 27 07:20:27 crc kubenswrapper[4764]: E0127 07:20:27.367261 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\": container with ID starting with 549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252 not found: ID does not exist" containerID="549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.367280 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252"} err="failed to get container status \"549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\": rpc error: code = NotFound desc = could not find container \"549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252\": container with ID starting with 549749be884198ecaa5dc4409387adb045613b4797e5eca5a4b663ac92bfc252 not found: ID does not exist" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.367292 4764 scope.go:117] "RemoveContainer" containerID="ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661" Jan 27 07:20:27 crc kubenswrapper[4764]: E0127 07:20:27.367681 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\": container with ID starting with ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661 not found: ID does not exist" containerID="ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661" Jan 27 07:20:27 crc kubenswrapper[4764]: I0127 07:20:27.367702 4764 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661"} err="failed to get container status \"ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\": rpc error: code = NotFound desc = could not find container \"ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661\": container with ID starting with ac86a3e1766d9b90b119f9641afa83e595180c82875b87754c9907d02f2a9661 not found: ID does not exist"
Jan 27 07:20:28 crc kubenswrapper[4764]: I0127 07:20:28.440714 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:28 crc kubenswrapper[4764]: I0127 07:20:28.441828 4764 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:28 crc kubenswrapper[4764]: I0127 07:20:28.449655 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.130901 4764 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.131390 4764 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.131790 4764 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.132120 4764 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.132399 4764 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:29 crc kubenswrapper[4764]: I0127 07:20:29.132467 4764 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.132804 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.260361 4764 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 07:20:29 crc kubenswrapper[4764]: I0127 07:20:29.261007 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.298619 4764 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e85723ceae3e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:20:29.29797629 +0000 UTC m=+241.893599026,LastTimestamp:2026-01-27 07:20:29.29797629 +0000 UTC m=+241.893599026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.334354 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms"
Jan 27 07:20:29 crc kubenswrapper[4764]: E0127 07:20:29.735832 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms"
Jan 27 07:20:30 crc kubenswrapper[4764]: I0127 07:20:30.246835 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6109d8be8f7fb3005d959372a847e043b79dd345da04387102ffdfe825f72660"}
Jan 27 07:20:30 crc kubenswrapper[4764]: E0127 07:20:30.537736 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s"
Jan 27 07:20:31 crc kubenswrapper[4764]: E0127 07:20:31.160870 4764 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e85723ceae3e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:20:29.29797629 +0000 UTC m=+241.893599026,LastTimestamp:2026-01-27 07:20:29.29797629 +0000 UTC m=+241.893599026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 07:20:32 crc kubenswrapper[4764]: E0127 07:20:32.139800 4764 controller.go:145] "Failed to ensure lease
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s"
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.287310 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7633a53f91f981b2d1024be9e6b1ac67f9ec1db11f82e700591be0b48a47ab05"}
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.288237 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:35 crc kubenswrapper[4764]: E0127 07:20:35.289385 4764 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 07:20:35 crc kubenswrapper[4764]: E0127 07:20:35.340743 4764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="6.4s"
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.437740 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.439067 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.466034 4764 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c"
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.466084 4764 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c"
Jan 27 07:20:35 crc kubenswrapper[4764]: E0127 07:20:35.466903 4764 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.467947 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:35 crc kubenswrapper[4764]: I0127 07:20:35.664507 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" containerName="oauth-openshift" containerID="cri-o://c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71" gracePeriod=15
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.036320 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.036908 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.037249 4764 status_manager.go:851] "Failed to get status for pod" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b2jhk\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182085 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-idp-0-file-data\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182159 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-router-certs\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182240 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-error\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182283 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-provider-selection\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182303 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c6nn\" (UniqueName: \"kubernetes.io/projected/b049cfac-c306-472f-ace1-bbbb32baf704-kube-api-access-4c6nn\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182325 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-cliconfig\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182388 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-session\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182492 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-serving-cert\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc
kubenswrapper[4764]: I0127 07:20:36.182547 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-trusted-ca-bundle\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182576 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b049cfac-c306-472f-ace1-bbbb32baf704-audit-dir\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182595 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-service-ca\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182618 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-login\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182676 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b049cfac-c306-472f-ace1-bbbb32baf704-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.182942 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-audit-policies\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.183571 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.183684 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-ocp-branding-template\") pod \"b049cfac-c306-472f-ace1-bbbb32baf704\" (UID: \"b049cfac-c306-472f-ace1-bbbb32baf704\") "
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.184109 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.184217 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.184368 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.184430 4764 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.184601 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.184691 4764 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b049cfac-c306-472f-ace1-bbbb32baf704-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.184777 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.189415 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.189623 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b049cfac-c306-472f-ace1-bbbb32baf704-kube-api-access-4c6nn" (OuterVolumeSpecName: "kube-api-access-4c6nn") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "kube-api-access-4c6nn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.190033 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.190053 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-user-template-login".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.190089 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.190279 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.190525 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.190701 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.190967 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b049cfac-c306-472f-ace1-bbbb32baf704" (UID: "b049cfac-c306-472f-ace1-bbbb32baf704"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286755 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286818 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c6nn\" (UniqueName: \"kubernetes.io/projected/b049cfac-c306-472f-ace1-bbbb32baf704-kube-api-access-4c6nn\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286842 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286869 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286925 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286954 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286977 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.286998 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.287017 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.287041 4764 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b049cfac-c306-472f-ace1-bbbb32baf704-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.297232 4764 generic.go:334] "Generic (PLEG): container finished" podID="b049cfac-c306-472f-ace1-bbbb32baf704" containerID="c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71" exitCode=0
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.297315 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" event={"ID":"b049cfac-c306-472f-ace1-bbbb32baf704","Type":"ContainerDied","Data":"c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71"}
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.297338 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.297366 4764 scope.go:117] "RemoveContainer" containerID="c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.297352 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" event={"ID":"b049cfac-c306-472f-ace1-bbbb32baf704","Type":"ContainerDied","Data":"20141ffeb3a7e73cdb5b48ab2193faae3943161cd0b6d81971a7f52fa923e8c5"}
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.298746 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.299205 4764 status_manager.go:851] "Failed to get status for pod" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b2jhk\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.300278 4764 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5aac4b9be4eb9dbda66042eba36ab1a5d011eb76822823ba34d8a42ce368405d"
exitCode=0
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.300357 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5aac4b9be4eb9dbda66042eba36ab1a5d011eb76822823ba34d8a42ce368405d"}
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.300402 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f0dd3f058b384732a4e9452d79fa8472cd615bfb28f71ca9903c3c4192439498"}
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.300999 4764 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.301029 4764 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.301144 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: E0127 07:20:36.301350 4764 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:36 crc kubenswrapper[4764]: E0127 07:20:36.301370 4764 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.301402 4764 status_manager.go:851] "Failed to get status for pod" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b2jhk\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.316673 4764 status_manager.go:851] "Failed to get status for pod" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.317237 4764 status_manager.go:851] "Failed to get status for pod" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" pod="openshift-authentication/oauth-openshift-558db77b4-b2jhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b2jhk\": dial tcp 38.102.83.73:6443: connect: connection refused"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.317284 4764 scope.go:117] "RemoveContainer" containerID="c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71"
Jan 27 07:20:36 crc kubenswrapper[4764]: E0127 07:20:36.317745 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71\": container with ID starting with c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71 not found: ID does not exist" containerID="c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71"
Jan 27 07:20:36 crc kubenswrapper[4764]: I0127 07:20:36.317780 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71"} err="failed to get container status \"c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71\": rpc error: code = NotFound desc = could not find container \"c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71\": container with ID starting with c29314756457fbc2aa090c90901d9a5dc8be0a3fc258bc4acc03fd377daf9c71 not found: ID does not exist"
Jan 27 07:20:37 crc kubenswrapper[4764]: I0127 07:20:37.321543 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78b6817b4b402917d4ccc40c1afcd888cd62f8a4e86e3567b679aef2ce213902"}
Jan 27 07:20:37 crc kubenswrapper[4764]: I0127 07:20:37.321586 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"197c5e8b20a0943d9ccebd51d29f8cd5e50a0585520d296df78b3f25f466173a"}
Jan 27 07:20:37 crc kubenswrapper[4764]: I0127 07:20:37.321598 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ace36da1dd9f9576a8792b9c043ce01fbd2bc7b64f4f7d3980de3908324f8eb"}
Jan 27 07:20:38 crc kubenswrapper[4764]: I0127 07:20:38.330228 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fbec7e63add90cf51a77cd4fb63a658d7f4191894758dca5afb2cb8de48bd039"}
Jan 27 07:20:38 crc kubenswrapper[4764]: I0127 07:20:38.330668 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5a7553bd6b9544ad5ca8b9a24784e303b21c95bc9f9d6ec7c00260c26baa9b14"}
Jan 27 07:20:38 crc kubenswrapper[4764]: I0127 07:20:38.330732 4764 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c"
Jan 27 07:20:38 crc kubenswrapper[4764]: I0127 07:20:38.330753 4764 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c"
Jan 27 07:20:38 crc kubenswrapper[4764]: I0127 07:20:38.330949 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:39 crc kubenswrapper[4764]: I0127 07:20:39.340147 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 27 07:20:39 crc kubenswrapper[4764]: I0127 07:20:39.340222 4764 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a" exitCode=1
Jan 27 07:20:39 crc kubenswrapper[4764]: I0127 07:20:39.340257 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a"}
Jan 27 07:20:39 crc kubenswrapper[4764]: I0127 07:20:39.340787 4764 scope.go:117] "RemoveContainer" containerID="7ba87d61941ed349de405719142e8ec535ac10bd44b4f5593fc698a4bfcbcd0a"
Jan 27 07:20:40 crc kubenswrapper[4764]: I0127 07:20:40.357650 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 27 07:20:40 crc kubenswrapper[4764]: I0127 07:20:40.358139 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5a29349f8ceacdb29e1fa100828db9d6c76db702b2e0ca3c74f9d31ba7db1455"}
Jan 27 07:20:40 crc kubenswrapper[4764]: I0127 07:20:40.468598 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:40 crc kubenswrapper[4764]: I0127 07:20:40.468673 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:40 crc kubenswrapper[4764]: I0127 07:20:40.478557 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:40 crc kubenswrapper[4764]: I0127 07:20:40.926968 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:20:40 crc kubenswrapper[4764]: I0127 07:20:40.934589 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:20:41 crc kubenswrapper[4764]: I0127 07:20:41.364517 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:20:43 crc kubenswrapper[4764]: I0127 07:20:43.345366 4764 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 07:20:43 crc kubenswrapper[4764]: I0127 07:20:43.377206 4764 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c" Jan 27 07:20:43 crc kubenswrapper[4764]: I0127 07:20:43.377258 4764 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c" Jan 27 07:20:43 crc kubenswrapper[4764]: I0127 07:20:43.382367 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:20:43 crc kubenswrapper[4764]: I0127 07:20:43.460429 4764 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="914425dd-c2df-44e4-8d16-b758948ec7e0" Jan 27 07:20:44 crc kubenswrapper[4764]: I0127 07:20:44.385387 4764 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c" Jan 27 07:20:44 crc kubenswrapper[4764]: I0127 07:20:44.385906 4764 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c" Jan 27 07:20:44 crc kubenswrapper[4764]: I0127 07:20:44.392980 4764 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="914425dd-c2df-44e4-8d16-b758948ec7e0" Jan 27 07:20:49 crc kubenswrapper[4764]: I0127 07:20:49.846088 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:20:50 crc kubenswrapper[4764]: I0127 07:20:50.129111 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 07:20:50 crc kubenswrapper[4764]: I0127 07:20:50.146907 4764 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 07:20:50 crc kubenswrapper[4764]: I0127 07:20:50.523325 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 07:20:51 crc kubenswrapper[4764]: I0127 07:20:51.233642 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:20:52 crc kubenswrapper[4764]: I0127 07:20:52.909749 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 07:20:54 crc kubenswrapper[4764]: I0127 07:20:54.594814 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 07:20:54 crc kubenswrapper[4764]: I0127 07:20:54.874234 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 07:20:55 crc kubenswrapper[4764]: I0127 07:20:55.881054 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 07:20:55 crc kubenswrapper[4764]: I0127 07:20:55.883266 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 07:20:56 crc kubenswrapper[4764]: I0127 07:20:56.347790 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 07:20:56 crc kubenswrapper[4764]: I0127 07:20:56.761465 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 07:20:56 crc kubenswrapper[4764]: I0127 07:20:56.845847 4764 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.065137 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.134285 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.180431 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.260463 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.269693 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.289320 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.299744 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.376430 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.377639 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.535275 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 
07:20:57.551717 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.956577 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.959303 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.966812 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:20:57 crc kubenswrapper[4764]: I0127 07:20:57.978629 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.032379 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.147119 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.204415 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.210911 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.435843 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.561648 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 
27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.616090 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.655841 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.702222 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.806688 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.899641 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.921423 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 07:20:58 crc kubenswrapper[4764]: I0127 07:20:58.943729 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.043527 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.072402 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.149293 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.149750 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 
07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.198115 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.318285 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.363863 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.494552 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.517775 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.568747 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.613022 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.618337 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.653499 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.804395 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.894899 4764 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 07:20:59 crc kubenswrapper[4764]: I0127 07:20:59.897950 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.018488 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.143509 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.167305 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.215276 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.282302 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.432112 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.490704 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.536078 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.598348 4764 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.672405 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.674021 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.676717 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.849375 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.880373 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.904498 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 07:21:00 crc kubenswrapper[4764]: I0127 07:21:00.958179 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.000822 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.148046 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.204062 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.254595 4764 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.297001 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.404303 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.425337 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.437675 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.537924 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.604300 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.607898 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.667769 4764 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.677204 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.813516 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:21:01 crc 
kubenswrapper[4764]: I0127 07:21:01.837029 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.867699 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.869297 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.883210 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.906353 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 07:21:01 crc kubenswrapper[4764]: I0127 07:21:01.933156 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.001300 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.045163 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.048644 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.052964 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.113577 4764 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.128007 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.142266 4764 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.151671 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.227979 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.260364 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.262478 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.277234 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.289206 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.366020 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.372378 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 
07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.714482 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.747256 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.923490 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 07:21:02 crc kubenswrapper[4764]: I0127 07:21:02.974773 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.020946 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.096128 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.097757 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.109872 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.119599 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.222270 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.288801 4764 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.312975 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.408303 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.449772 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.529850 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.572557 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.586240 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.692737 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.781541 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.867265 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.875458 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.928408 4764 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 07:21:03 crc kubenswrapper[4764]: I0127 07:21:03.940496 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.076516 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.127189 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.257164 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.293291 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.351030 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.370351 4764 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.379552 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.382638 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.411248 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 07:21:04 crc kubenswrapper[4764]: 
I0127 07:21:04.415943 4764 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.421871 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-b2jhk"] Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.421993 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-574dcf5686-5kwc7","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:21:04 crc kubenswrapper[4764]: E0127 07:21:04.422354 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" containerName="installer" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.422379 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" containerName="installer" Jan 27 07:21:04 crc kubenswrapper[4764]: E0127 07:21:04.422390 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" containerName="oauth-openshift" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.422399 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" containerName="oauth-openshift" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.422561 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" containerName="oauth-openshift" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.422590 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c0f57a-a21d-44db-b8f8-88a40a1cdce8" containerName="installer" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.422553 4764 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c" Jan 27 07:21:04 crc kubenswrapper[4764]: 
I0127 07:21:04.422859 4764 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9dde2e8c-c7c1-49b4-8c37-12e2f0471e4c" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.423152 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.426823 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.426938 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.427278 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.427404 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.427933 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.429139 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.430405 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.430408 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.430712 4764 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.431611 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.431872 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.432128 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.437120 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.438387 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.438495 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.446842 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.452364 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b049cfac-c306-472f-ace1-bbbb32baf704" path="/var/lib/kubelet/pods/b049cfac-c306-472f-ace1-bbbb32baf704/volumes" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.478059 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.478017468 podStartE2EDuration="21.478017468s" podCreationTimestamp="2026-01-27 07:20:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:21:04.471489479 +0000 UTC m=+277.067112025" watchObservedRunningTime="2026-01-27 07:21:04.478017468 +0000 UTC m=+277.073639994" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.485855 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.547689 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-service-ca\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.547866 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-router-certs\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.547913 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7vb\" (UniqueName: \"kubernetes.io/projected/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-kube-api-access-fj7vb\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.547943 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.547976 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548038 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548075 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-audit-dir\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548102 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-session\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: 
\"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548226 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548267 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-login\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548335 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-error\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548458 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-audit-policies\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548586 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.548727 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650594 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650668 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-service-ca\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650726 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-router-certs\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650757 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650780 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650802 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj7vb\" (UniqueName: \"kubernetes.io/projected/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-kube-api-access-fj7vb\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650828 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " 
pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650856 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-audit-dir\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650880 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-session\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650904 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650927 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-login\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650952 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-error\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.650998 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-audit-policies\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.651029 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.652236 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.652359 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-audit-dir\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc 
kubenswrapper[4764]: I0127 07:21:04.652836 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.653736 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-service-ca\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.654427 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-audit-policies\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.659343 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.659691 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-login\") pod 
\"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.659725 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.659976 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.660186 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-router-certs\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.661912 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-user-template-error\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.663397 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.670196 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-v4-0-config-system-session\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.672890 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.674809 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj7vb\" (UniqueName: \"kubernetes.io/projected/6425321e-cfe9-473a-b5ed-ca69b4ea9ff8-kube-api-access-fj7vb\") pod \"oauth-openshift-574dcf5686-5kwc7\" (UID: \"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8\") " pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.718337 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.752616 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.794185 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.840085 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.853347 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.957770 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 07:21:04 crc kubenswrapper[4764]: I0127 07:21:04.975526 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574dcf5686-5kwc7"] Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:04.999889 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.002149 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.079776 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.195804 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.365359 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 
07:21:05.411717 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.436231 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.470573 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.482391 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.523521 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.524581 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" event={"ID":"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8","Type":"ContainerStarted","Data":"76f51009046fcdee09777cdce598931a853da0f1b35f0652d8143f1dbb312bd3"} Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.524685 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" event={"ID":"6425321e-cfe9-473a-b5ed-ca69b4ea9ff8","Type":"ContainerStarted","Data":"ec163c6f154f536c3a53f000e9d1392b5022c03be064ce2ddf7b0a56a067aa65"} Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.568963 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.583765 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 
07:21:05.585787 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.593756 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.627259 4764 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.707231 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.785353 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.814895 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 07:21:05 crc kubenswrapper[4764]: I0127 07:21:05.983224 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.001859 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.033714 4764 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.034045 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://7633a53f91f981b2d1024be9e6b1ac67f9ec1db11f82e700591be0b48a47ab05" gracePeriod=5 Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.152804 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.181408 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.207199 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.219201 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.229174 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.246963 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.270255 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.396953 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.398849 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.514633 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.531201 
4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.537138 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.565799 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-574dcf5686-5kwc7" podStartSLOduration=56.565768175 podStartE2EDuration="56.565768175s" podCreationTimestamp="2026-01-27 07:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:21:05.555118496 +0000 UTC m=+278.150741022" watchObservedRunningTime="2026-01-27 07:21:06.565768175 +0000 UTC m=+279.161390701" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.711123 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.825251 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.951114 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.960624 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 07:21:06 crc kubenswrapper[4764]: I0127 07:21:06.981699 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.034539 4764 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.092958 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.125126 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.131611 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.164712 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.248666 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.340051 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.371755 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.391645 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.409517 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.438525 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 07:21:07 
crc kubenswrapper[4764]: I0127 07:21:07.450795 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.488001 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.493638 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.593491 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.691199 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.799624 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.832113 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.889349 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 07:21:07 crc kubenswrapper[4764]: I0127 07:21:07.918204 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.031694 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.111784 4764 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.117079 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.323972 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.609519 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.714498 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.799544 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:21:08 crc kubenswrapper[4764]: I0127 07:21:08.843628 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.079082 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.155044 4764 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.240967 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.366868 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.427193 4764 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.665238 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.742575 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 07:21:09 crc kubenswrapper[4764]: I0127 07:21:09.789841 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 07:21:10 crc kubenswrapper[4764]: I0127 07:21:10.009915 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:21:10 crc kubenswrapper[4764]: I0127 07:21:10.013219 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 07:21:10 crc kubenswrapper[4764]: I0127 07:21:10.053309 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 07:21:10 crc kubenswrapper[4764]: I0127 07:21:10.297945 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 07:21:10 crc kubenswrapper[4764]: I0127 07:21:10.560834 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 07:21:10 crc kubenswrapper[4764]: I0127 07:21:10.702065 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 07:21:10 crc kubenswrapper[4764]: I0127 07:21:10.962880 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.068482 
4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.364973 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.423235 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.486637 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.577931 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.578042 4764 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7633a53f91f981b2d1024be9e6b1ac67f9ec1db11f82e700591be0b48a47ab05" exitCode=137 Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.651095 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.651300 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.691337 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.706213 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.764300 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.764926 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765030 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765141 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765226 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765029 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765096 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765168 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765264 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765758 4764 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765837 4764 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765908 4764 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.765973 4764 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.776000 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.867809 4764 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.889208 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 07:21:11 crc kubenswrapper[4764]: I0127 07:21:11.917068 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 07:21:12 crc kubenswrapper[4764]: I0127 07:21:12.038998 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 07:21:12 crc kubenswrapper[4764]: I0127 07:21:12.447368 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 07:21:12 crc kubenswrapper[4764]: I0127 07:21:12.587992 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 07:21:12 crc kubenswrapper[4764]: I0127 07:21:12.588123 4764 scope.go:117] "RemoveContainer" containerID="7633a53f91f981b2d1024be9e6b1ac67f9ec1db11f82e700591be0b48a47ab05" Jan 27 07:21:12 crc kubenswrapper[4764]: I0127 07:21:12.588229 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:21:12 crc kubenswrapper[4764]: I0127 07:21:12.831880 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 07:21:12 crc kubenswrapper[4764]: I0127 07:21:12.889759 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 07:21:14 crc kubenswrapper[4764]: I0127 07:21:14.023402 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.331574 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pblvj"] Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.332852 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pblvj" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="registry-server" containerID="cri-o://bf165a0fb2e5cfb0502cae0f6a6cc971a95cfd61357737e10ef721c441436e5f" gracePeriod=30 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.347308 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlsvp"] Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.347755 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nlsvp" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="registry-server" containerID="cri-o://3624d0b4a219c0a4f6af9623f18b4107eda114182ec006f2cdf0f197a28faaa3" gracePeriod=30 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.369672 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmmqc"] Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.370036 4764 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" podUID="591d7bc8-2161-4f33-bf8d-38d89380509f" containerName="marketplace-operator" containerID="cri-o://4b476375c41debd026d76b0b2a60f660f2e3b2ae841c7cab10a3a25328f639a4" gracePeriod=30 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.383256 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcjk"] Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.383605 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mjcjk" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="registry-server" containerID="cri-o://8b90b1d93745177516bc8d6de9ca75b218776061e967a4eade74602213c892b0" gracePeriod=30 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.390364 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k6bmt"] Jan 27 07:21:18 crc kubenswrapper[4764]: E0127 07:21:18.390667 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.390686 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.390780 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.391205 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.396124 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddpln"] Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.396338 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ddpln" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="registry-server" containerID="cri-o://4c496ca6c34dd6e8af4a9efb1dca002ad2f883be05211461029dd6275ab86a51" gracePeriod=30 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.400856 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k6bmt"] Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.466510 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx7t8\" (UniqueName: \"kubernetes.io/projected/bad48f2b-ed0c-4320-b601-5851008a6ae3-kube-api-access-hx7t8\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.466583 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bad48f2b-ed0c-4320-b601-5851008a6ae3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.466726 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bad48f2b-ed0c-4320-b601-5851008a6ae3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.568306 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx7t8\" (UniqueName: \"kubernetes.io/projected/bad48f2b-ed0c-4320-b601-5851008a6ae3-kube-api-access-hx7t8\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.568356 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bad48f2b-ed0c-4320-b601-5851008a6ae3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.568417 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bad48f2b-ed0c-4320-b601-5851008a6ae3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.569913 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bad48f2b-ed0c-4320-b601-5851008a6ae3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc 
kubenswrapper[4764]: I0127 07:21:18.577022 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bad48f2b-ed0c-4320-b601-5851008a6ae3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.587818 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx7t8\" (UniqueName: \"kubernetes.io/projected/bad48f2b-ed0c-4320-b601-5851008a6ae3-kube-api-access-hx7t8\") pod \"marketplace-operator-79b997595-k6bmt\" (UID: \"bad48f2b-ed0c-4320-b601-5851008a6ae3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.646519 4764 generic.go:334] "Generic (PLEG): container finished" podID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerID="bf165a0fb2e5cfb0502cae0f6a6cc971a95cfd61357737e10ef721c441436e5f" exitCode=0 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.646605 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pblvj" event={"ID":"6ab6b260-c1b8-49ff-aa32-54abd16f0b66","Type":"ContainerDied","Data":"bf165a0fb2e5cfb0502cae0f6a6cc971a95cfd61357737e10ef721c441436e5f"} Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.650947 4764 generic.go:334] "Generic (PLEG): container finished" podID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerID="3624d0b4a219c0a4f6af9623f18b4107eda114182ec006f2cdf0f197a28faaa3" exitCode=0 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.650995 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlsvp" event={"ID":"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1","Type":"ContainerDied","Data":"3624d0b4a219c0a4f6af9623f18b4107eda114182ec006f2cdf0f197a28faaa3"} Jan 27 
07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.653621 4764 generic.go:334] "Generic (PLEG): container finished" podID="feb5a7ef-0513-4f04-8232-9490e959628d" containerID="8b90b1d93745177516bc8d6de9ca75b218776061e967a4eade74602213c892b0" exitCode=0 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.653748 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcjk" event={"ID":"feb5a7ef-0513-4f04-8232-9490e959628d","Type":"ContainerDied","Data":"8b90b1d93745177516bc8d6de9ca75b218776061e967a4eade74602213c892b0"} Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.656128 4764 generic.go:334] "Generic (PLEG): container finished" podID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerID="4c496ca6c34dd6e8af4a9efb1dca002ad2f883be05211461029dd6275ab86a51" exitCode=0 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.656163 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpln" event={"ID":"ba301678-dac1-45dd-a1fc-6db20a2f38aa","Type":"ContainerDied","Data":"4c496ca6c34dd6e8af4a9efb1dca002ad2f883be05211461029dd6275ab86a51"} Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.661475 4764 generic.go:334] "Generic (PLEG): container finished" podID="591d7bc8-2161-4f33-bf8d-38d89380509f" containerID="4b476375c41debd026d76b0b2a60f660f2e3b2ae841c7cab10a3a25328f639a4" exitCode=0 Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.661530 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" event={"ID":"591d7bc8-2161-4f33-bf8d-38d89380509f","Type":"ContainerDied","Data":"4b476375c41debd026d76b0b2a60f660f2e3b2ae841c7cab10a3a25328f639a4"} Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.779409 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.787328 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pblvj" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.825798 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.838560 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlsvp" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.862539 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcjk" Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.876322 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvzdz\" (UniqueName: \"kubernetes.io/projected/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-kube-api-access-jvzdz\") pod \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") " Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.876486 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzvbw\" (UniqueName: \"kubernetes.io/projected/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-kube-api-access-xzvbw\") pod \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.876573 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-catalog-content\") pod \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") " Jan 27 
07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.877034 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-catalog-content\") pod \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.877156 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2xn7\" (UniqueName: \"kubernetes.io/projected/591d7bc8-2161-4f33-bf8d-38d89380509f-kube-api-access-z2xn7\") pod \"591d7bc8-2161-4f33-bf8d-38d89380509f\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.877221 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca\") pod \"591d7bc8-2161-4f33-bf8d-38d89380509f\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.877254 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-utilities\") pod \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\" (UID: \"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.877321 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics\") pod \"591d7bc8-2161-4f33-bf8d-38d89380509f\" (UID: \"591d7bc8-2161-4f33-bf8d-38d89380509f\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.877491 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-utilities\") pod \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\" (UID: \"6ab6b260-c1b8-49ff-aa32-54abd16f0b66\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.882557 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "591d7bc8-2161-4f33-bf8d-38d89380509f" (UID: "591d7bc8-2161-4f33-bf8d-38d89380509f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.883115 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-kube-api-access-jvzdz" (OuterVolumeSpecName: "kube-api-access-jvzdz") pod "6ab6b260-c1b8-49ff-aa32-54abd16f0b66" (UID: "6ab6b260-c1b8-49ff-aa32-54abd16f0b66"). InnerVolumeSpecName "kube-api-access-jvzdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.884635 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-utilities" (OuterVolumeSpecName: "utilities") pod "6ab6b260-c1b8-49ff-aa32-54abd16f0b66" (UID: "6ab6b260-c1b8-49ff-aa32-54abd16f0b66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.890657 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591d7bc8-2161-4f33-bf8d-38d89380509f-kube-api-access-z2xn7" (OuterVolumeSpecName: "kube-api-access-z2xn7") pod "591d7bc8-2161-4f33-bf8d-38d89380509f" (UID: "591d7bc8-2161-4f33-bf8d-38d89380509f"). InnerVolumeSpecName "kube-api-access-z2xn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.891497 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-kube-api-access-xzvbw" (OuterVolumeSpecName: "kube-api-access-xzvbw") pod "fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" (UID: "fa217f31-3e10-46b8-a8f0-7ebe2a663bf1"). InnerVolumeSpecName "kube-api-access-xzvbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.891971 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-utilities" (OuterVolumeSpecName: "utilities") pod "fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" (UID: "fa217f31-3e10-46b8-a8f0-7ebe2a663bf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.895043 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "591d7bc8-2161-4f33-bf8d-38d89380509f" (UID: "591d7bc8-2161-4f33-bf8d-38d89380509f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.941772 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ab6b260-c1b8-49ff-aa32-54abd16f0b66" (UID: "6ab6b260-c1b8-49ff-aa32-54abd16f0b66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.966054 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" (UID: "fa217f31-3e10-46b8-a8f0-7ebe2a663bf1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.983565 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj44t\" (UniqueName: \"kubernetes.io/projected/feb5a7ef-0513-4f04-8232-9490e959628d-kube-api-access-gj44t\") pod \"feb5a7ef-0513-4f04-8232-9490e959628d\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.983664 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-utilities\") pod \"feb5a7ef-0513-4f04-8232-9490e959628d\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.983755 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-catalog-content\") pod \"feb5a7ef-0513-4f04-8232-9490e959628d\" (UID: \"feb5a7ef-0513-4f04-8232-9490e959628d\") "
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.984609 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-utilities" (OuterVolumeSpecName: "utilities") pod "feb5a7ef-0513-4f04-8232-9490e959628d" (UID: "feb5a7ef-0513-4f04-8232-9490e959628d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985286 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985346 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2xn7\" (UniqueName: \"kubernetes.io/projected/591d7bc8-2161-4f33-bf8d-38d89380509f-kube-api-access-z2xn7\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985362 4764 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985375 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985386 4764 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/591d7bc8-2161-4f33-bf8d-38d89380509f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985398 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985889 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985905 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvzdz\" (UniqueName: \"kubernetes.io/projected/6ab6b260-c1b8-49ff-aa32-54abd16f0b66-kube-api-access-jvzdz\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985915 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzvbw\" (UniqueName: \"kubernetes.io/projected/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-kube-api-access-xzvbw\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.985924 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:18 crc kubenswrapper[4764]: I0127 07:21:18.991261 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb5a7ef-0513-4f04-8232-9490e959628d-kube-api-access-gj44t" (OuterVolumeSpecName: "kube-api-access-gj44t") pod "feb5a7ef-0513-4f04-8232-9490e959628d" (UID: "feb5a7ef-0513-4f04-8232-9490e959628d"). InnerVolumeSpecName "kube-api-access-gj44t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.016335 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feb5a7ef-0513-4f04-8232-9490e959628d" (UID: "feb5a7ef-0513-4f04-8232-9490e959628d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.028958 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k6bmt"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.087428 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb5a7ef-0513-4f04-8232-9490e959628d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.088397 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj44t\" (UniqueName: \"kubernetes.io/projected/feb5a7ef-0513-4f04-8232-9490e959628d-kube-api-access-gj44t\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.230377 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddpln"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.291613 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-utilities\") pod \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") "
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.291747 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-catalog-content\") pod \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") "
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.291812 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ljtm\" (UniqueName: \"kubernetes.io/projected/ba301678-dac1-45dd-a1fc-6db20a2f38aa-kube-api-access-6ljtm\") pod \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\" (UID: \"ba301678-dac1-45dd-a1fc-6db20a2f38aa\") "
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.292637 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-utilities" (OuterVolumeSpecName: "utilities") pod "ba301678-dac1-45dd-a1fc-6db20a2f38aa" (UID: "ba301678-dac1-45dd-a1fc-6db20a2f38aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.299340 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba301678-dac1-45dd-a1fc-6db20a2f38aa-kube-api-access-6ljtm" (OuterVolumeSpecName: "kube-api-access-6ljtm") pod "ba301678-dac1-45dd-a1fc-6db20a2f38aa" (UID: "ba301678-dac1-45dd-a1fc-6db20a2f38aa"). InnerVolumeSpecName "kube-api-access-6ljtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.393025 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ljtm\" (UniqueName: \"kubernetes.io/projected/ba301678-dac1-45dd-a1fc-6db20a2f38aa-kube-api-access-6ljtm\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.393074 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.420953 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba301678-dac1-45dd-a1fc-6db20a2f38aa" (UID: "ba301678-dac1-45dd-a1fc-6db20a2f38aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.496187 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba301678-dac1-45dd-a1fc-6db20a2f38aa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.671058 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcjk" event={"ID":"feb5a7ef-0513-4f04-8232-9490e959628d","Type":"ContainerDied","Data":"0f827a78cafa65bd18bd803bdb545b72f053672fe7650b2417440fb0640bf1c5"}
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.671121 4764 scope.go:117] "RemoveContainer" containerID="8b90b1d93745177516bc8d6de9ca75b218776061e967a4eade74602213c892b0"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.671155 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcjk"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.675193 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpln" event={"ID":"ba301678-dac1-45dd-a1fc-6db20a2f38aa","Type":"ContainerDied","Data":"ccb6460b230fd65f02204b14d959a74796915fc31749c7862334abad232e8ddd"}
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.675240 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddpln"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.677547 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.677597 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hmmqc" event={"ID":"591d7bc8-2161-4f33-bf8d-38d89380509f","Type":"ContainerDied","Data":"170bd0ef0c85f53767cc066b98d4b7bd9086ba8a26c3b4915d6461e5bd63197a"}
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.682244 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pblvj"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.682260 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pblvj" event={"ID":"6ab6b260-c1b8-49ff-aa32-54abd16f0b66","Type":"ContainerDied","Data":"3197ffbf0d9be800f619cae7c34776f376fdc64e712d073d6be0b0945c3a63e3"}
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.685206 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlsvp" event={"ID":"fa217f31-3e10-46b8-a8f0-7ebe2a663bf1","Type":"ContainerDied","Data":"107efd72a335cb3273fa0e53add39bf7225c4780d22750c4cfa2abd50ea9de2f"}
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.685338 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlsvp"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.687961 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" event={"ID":"bad48f2b-ed0c-4320-b601-5851008a6ae3","Type":"ContainerStarted","Data":"11cda7a1645577cc47159bd3ae7baf562cb010c97d53b059d2c20f92e9d7c510"}
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.688037 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" event={"ID":"bad48f2b-ed0c-4320-b601-5851008a6ae3","Type":"ContainerStarted","Data":"19427daeb868d1d7b0ea19826549c3ecf136081cc765772130aaf6780fef5bcf"}
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.688347 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.697215 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.708199 4764 scope.go:117] "RemoveContainer" containerID="095c7b7e6f43f2c3b16bde80679e58c478d2b5ab164dd4ec31ee1b86adb5055f"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.726858 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k6bmt" podStartSLOduration=1.7268226420000001 podStartE2EDuration="1.726822642s" podCreationTimestamp="2026-01-27 07:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:21:19.714210676 +0000 UTC m=+292.309833252" watchObservedRunningTime="2026-01-27 07:21:19.726822642 +0000 UTC m=+292.322445178"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.742969 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddpln"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.747988 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ddpln"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.759639 4764 scope.go:117] "RemoveContainer" containerID="329ad7390da28ee8b509313d0ec61d394834a11156fbd0bb2c38ee7c8bfaafbf"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.780787 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmmqc"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.790784 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hmmqc"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.795173 4764 scope.go:117] "RemoveContainer" containerID="4c496ca6c34dd6e8af4a9efb1dca002ad2f883be05211461029dd6275ab86a51"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.795922 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlsvp"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.813409 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nlsvp"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.823855 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pblvj"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.830028 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pblvj"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.831278 4764 scope.go:117] "RemoveContainer" containerID="5665df7008dff6624aa8c5c8e4c97f3106b7040f49b590d5fa06381287a1d603"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.834681 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcjk"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.838786 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcjk"]
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.861148 4764 scope.go:117] "RemoveContainer" containerID="e42158ba9b0e0cb93922c3a48440f097db35d5f82e7b8dc5fef663a8fa357b96"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.877577 4764 scope.go:117] "RemoveContainer" containerID="4b476375c41debd026d76b0b2a60f660f2e3b2ae841c7cab10a3a25328f639a4"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.893288 4764 scope.go:117] "RemoveContainer" containerID="bf165a0fb2e5cfb0502cae0f6a6cc971a95cfd61357737e10ef721c441436e5f"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.913169 4764 scope.go:117] "RemoveContainer" containerID="15f302c91a9569a605bb47ebc6a917497ea0f6c4827ab32517783b0cb003a7d3"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.931373 4764 scope.go:117] "RemoveContainer" containerID="c3d7c29efb61f8d31448433be8fd205c568c5f80cee37727249fd5a8727098b2"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.945825 4764 scope.go:117] "RemoveContainer" containerID="3624d0b4a219c0a4f6af9623f18b4107eda114182ec006f2cdf0f197a28faaa3"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.963190 4764 scope.go:117] "RemoveContainer" containerID="acf50b31666bd7cd33957749b5554e0a19a17b418450b9698a1022a653449e3b"
Jan 27 07:21:19 crc kubenswrapper[4764]: I0127 07:21:19.980201 4764 scope.go:117] "RemoveContainer" containerID="52d309afb5c00cb13935d751526b9d93c4db01a4540f7b5cf98a556d53b2e89b"
Jan 27 07:21:20 crc kubenswrapper[4764]: I0127 07:21:20.449611 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591d7bc8-2161-4f33-bf8d-38d89380509f" path="/var/lib/kubelet/pods/591d7bc8-2161-4f33-bf8d-38d89380509f/volumes"
Jan 27 07:21:20 crc kubenswrapper[4764]: I0127 07:21:20.450800 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" path="/var/lib/kubelet/pods/6ab6b260-c1b8-49ff-aa32-54abd16f0b66/volumes"
Jan 27 07:21:20 crc kubenswrapper[4764]: I0127 07:21:20.451694 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" path="/var/lib/kubelet/pods/ba301678-dac1-45dd-a1fc-6db20a2f38aa/volumes"
Jan 27 07:21:20 crc kubenswrapper[4764]: I0127 07:21:20.453179 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" path="/var/lib/kubelet/pods/fa217f31-3e10-46b8-a8f0-7ebe2a663bf1/volumes"
Jan 27 07:21:20 crc kubenswrapper[4764]: I0127 07:21:20.454070 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" path="/var/lib/kubelet/pods/feb5a7ef-0513-4f04-8232-9490e959628d/volumes"
Jan 27 07:21:28 crc kubenswrapper[4764]: I0127 07:21:28.220581 4764 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.479145 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nttjc"]
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.479729 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" podUID="2d754c80-9bb1-4cbe-8068-edb1bba00f87" containerName="controller-manager" containerID="cri-o://92b624c7b38c8fcf45256fa12888b9f5b5f1e41862d9929a8f07a483bc6b5abe" gracePeriod=30
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.581131 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"]
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.581979 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" podUID="a7073743-ec8e-48d4-a853-f1b6e10343e4" containerName="route-controller-manager" containerID="cri-o://8fab584a70ff2b44452f248fab58eef47e52755b5dcd5be3f11b78385579558c" gracePeriod=30
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.759655 4764 generic.go:334] "Generic (PLEG): container finished" podID="a7073743-ec8e-48d4-a853-f1b6e10343e4" containerID="8fab584a70ff2b44452f248fab58eef47e52755b5dcd5be3f11b78385579558c" exitCode=0
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.759911 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" event={"ID":"a7073743-ec8e-48d4-a853-f1b6e10343e4","Type":"ContainerDied","Data":"8fab584a70ff2b44452f248fab58eef47e52755b5dcd5be3f11b78385579558c"}
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.762953 4764 generic.go:334] "Generic (PLEG): container finished" podID="2d754c80-9bb1-4cbe-8068-edb1bba00f87" containerID="92b624c7b38c8fcf45256fa12888b9f5b5f1e41862d9929a8f07a483bc6b5abe" exitCode=0
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.762984 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" event={"ID":"2d754c80-9bb1-4cbe-8068-edb1bba00f87","Type":"ContainerDied","Data":"92b624c7b38c8fcf45256fa12888b9f5b5f1e41862d9929a8f07a483bc6b5abe"}
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.858470 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc"
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.929683 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.968891 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-config\") pod \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") "
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.968945 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-proxy-ca-bundles\") pod \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") "
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.968973 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdf25\" (UniqueName: \"kubernetes.io/projected/2d754c80-9bb1-4cbe-8068-edb1bba00f87-kube-api-access-vdf25\") pod \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") "
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.969015 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-client-ca\") pod \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") "
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.969041 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d754c80-9bb1-4cbe-8068-edb1bba00f87-serving-cert\") pod \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\" (UID: \"2d754c80-9bb1-4cbe-8068-edb1bba00f87\") "
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.973342 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2d754c80-9bb1-4cbe-8068-edb1bba00f87" (UID: "2d754c80-9bb1-4cbe-8068-edb1bba00f87"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.973376 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-config" (OuterVolumeSpecName: "config") pod "2d754c80-9bb1-4cbe-8068-edb1bba00f87" (UID: "2d754c80-9bb1-4cbe-8068-edb1bba00f87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.973421 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d754c80-9bb1-4cbe-8068-edb1bba00f87" (UID: "2d754c80-9bb1-4cbe-8068-edb1bba00f87"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.977007 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d754c80-9bb1-4cbe-8068-edb1bba00f87-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d754c80-9bb1-4cbe-8068-edb1bba00f87" (UID: "2d754c80-9bb1-4cbe-8068-edb1bba00f87"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:21:29 crc kubenswrapper[4764]: I0127 07:21:29.979926 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d754c80-9bb1-4cbe-8068-edb1bba00f87-kube-api-access-vdf25" (OuterVolumeSpecName: "kube-api-access-vdf25") pod "2d754c80-9bb1-4cbe-8068-edb1bba00f87" (UID: "2d754c80-9bb1-4cbe-8068-edb1bba00f87"). InnerVolumeSpecName "kube-api-access-vdf25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.069868 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-config\") pod \"a7073743-ec8e-48d4-a853-f1b6e10343e4\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") "
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.069994 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7073743-ec8e-48d4-a853-f1b6e10343e4-serving-cert\") pod \"a7073743-ec8e-48d4-a853-f1b6e10343e4\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") "
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.070082 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6hkd\" (UniqueName: \"kubernetes.io/projected/a7073743-ec8e-48d4-a853-f1b6e10343e4-kube-api-access-n6hkd\") pod \"a7073743-ec8e-48d4-a853-f1b6e10343e4\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") "
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.070143 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-client-ca\") pod \"a7073743-ec8e-48d4-a853-f1b6e10343e4\" (UID: \"a7073743-ec8e-48d4-a853-f1b6e10343e4\") "
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.070512 4764 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.070544 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdf25\" (UniqueName: \"kubernetes.io/projected/2d754c80-9bb1-4cbe-8068-edb1bba00f87-kube-api-access-vdf25\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.070565 4764 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.070581 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d754c80-9bb1-4cbe-8068-edb1bba00f87-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.070598 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d754c80-9bb1-4cbe-8068-edb1bba00f87-config\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.071308 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-client-ca" (OuterVolumeSpecName: "client-ca") pod "a7073743-ec8e-48d4-a853-f1b6e10343e4" (UID: "a7073743-ec8e-48d4-a853-f1b6e10343e4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.071619 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-config" (OuterVolumeSpecName: "config") pod "a7073743-ec8e-48d4-a853-f1b6e10343e4" (UID: "a7073743-ec8e-48d4-a853-f1b6e10343e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.074676 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7073743-ec8e-48d4-a853-f1b6e10343e4-kube-api-access-n6hkd" (OuterVolumeSpecName: "kube-api-access-n6hkd") pod "a7073743-ec8e-48d4-a853-f1b6e10343e4" (UID: "a7073743-ec8e-48d4-a853-f1b6e10343e4"). InnerVolumeSpecName "kube-api-access-n6hkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.075340 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7073743-ec8e-48d4-a853-f1b6e10343e4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a7073743-ec8e-48d4-a853-f1b6e10343e4" (UID: "a7073743-ec8e-48d4-a853-f1b6e10343e4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.171945 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7073743-ec8e-48d4-a853-f1b6e10343e4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.171999 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6hkd\" (UniqueName: \"kubernetes.io/projected/a7073743-ec8e-48d4-a853-f1b6e10343e4-kube-api-access-n6hkd\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.172044 4764 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.172056 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7073743-ec8e-48d4-a853-f1b6e10343e4-config\") on node \"crc\" DevicePath \"\""
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558516 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz"]
Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558709 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="extract-content"
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558721 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="extract-content"
Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558731 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="extract-utilities"
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558737 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="extract-utilities"
Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558746 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="registry-server"
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558751 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="registry-server"
Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558761 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="extract-content"
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558768 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="extract-content"
Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558779 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="registry-server"
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558786 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="registry-server"
Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558794 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591d7bc8-2161-4f33-bf8d-38d89380509f" containerName="marketplace-operator"
Jan 27 07:21:30 crc
kubenswrapper[4764]: I0127 07:21:30.558800 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="591d7bc8-2161-4f33-bf8d-38d89380509f" containerName="marketplace-operator" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558806 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="extract-utilities" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558812 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="extract-utilities" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558820 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d754c80-9bb1-4cbe-8068-edb1bba00f87" containerName="controller-manager" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558826 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d754c80-9bb1-4cbe-8068-edb1bba00f87" containerName="controller-manager" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558836 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="extract-utilities" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558843 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="extract-utilities" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558854 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558861 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558871 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7073743-ec8e-48d4-a853-f1b6e10343e4" containerName="route-controller-manager" Jan 27 
07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558878 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7073743-ec8e-48d4-a853-f1b6e10343e4" containerName="route-controller-manager" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558888 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558894 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558902 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="extract-utilities" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558908 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="extract-utilities" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558916 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="extract-content" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558922 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="extract-content" Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.558930 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="extract-content" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.558935 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="extract-content" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559012 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7073743-ec8e-48d4-a853-f1b6e10343e4" containerName="route-controller-manager" 
Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559026 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="591d7bc8-2161-4f33-bf8d-38d89380509f" containerName="marketplace-operator" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559034 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab6b260-c1b8-49ff-aa32-54abd16f0b66" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559044 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba301678-dac1-45dd-a1fc-6db20a2f38aa" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559051 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa217f31-3e10-46b8-a8f0-7ebe2a663bf1" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559059 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d754c80-9bb1-4cbe-8068-edb1bba00f87" containerName="controller-manager" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559065 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb5a7ef-0513-4f04-8232-9490e959628d" containerName="registry-server" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.559412 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.577626 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6"] Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.578239 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.588826 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz"] Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.665114 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6"] Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679000 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-proxy-ca-bundles\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679305 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6r5\" (UniqueName: \"kubernetes.io/projected/32998075-6895-4c1c-b55c-a31373accd6c-kube-api-access-bz6r5\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679385 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-client-ca\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679478 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-config\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679557 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-client-ca\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679641 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d271c13-440a-49ce-88c5-665138c975b1-serving-cert\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679741 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32998075-6895-4c1c-b55c-a31373accd6c-serving-cert\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679878 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvbg\" (UniqueName: \"kubernetes.io/projected/8d271c13-440a-49ce-88c5-665138c975b1-kube-api-access-4wvbg\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " 
pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.679987 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-config\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.712728 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6"] Jan 27 07:21:30 crc kubenswrapper[4764]: E0127 07:21:30.713658 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-4wvbg serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" podUID="8d271c13-440a-49ce-88c5-665138c975b1" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.769201 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.769197 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nttjc" event={"ID":"2d754c80-9bb1-4cbe-8068-edb1bba00f87","Type":"ContainerDied","Data":"c37f508c28e6b8e0541dcf0de3d267524203d2304459ea970b11fa3872a021fb"} Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.769368 4764 scope.go:117] "RemoveContainer" containerID="92b624c7b38c8fcf45256fa12888b9f5b5f1e41862d9929a8f07a483bc6b5abe" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.772396 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.772399 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" event={"ID":"a7073743-ec8e-48d4-a853-f1b6e10343e4","Type":"ContainerDied","Data":"805f7cb793f6944f8d15e5095b0692ff03d089be9d62a1ca1d0841bfd81faa41"} Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.772526 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780672 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-client-ca\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780706 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-config\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780727 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-client-ca\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780749 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d271c13-440a-49ce-88c5-665138c975b1-serving-cert\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780777 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32998075-6895-4c1c-b55c-a31373accd6c-serving-cert\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780805 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvbg\" (UniqueName: \"kubernetes.io/projected/8d271c13-440a-49ce-88c5-665138c975b1-kube-api-access-4wvbg\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780835 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-config\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780858 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-proxy-ca-bundles\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: 
\"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.780903 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6r5\" (UniqueName: \"kubernetes.io/projected/32998075-6895-4c1c-b55c-a31373accd6c-kube-api-access-bz6r5\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.782384 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-client-ca\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.782397 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-client-ca\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.782515 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-config\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.783030 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-proxy-ca-bundles\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.783134 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.783357 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32998075-6895-4c1c-b55c-a31373accd6c-config\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.785477 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d271c13-440a-49ce-88c5-665138c975b1-serving-cert\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.785980 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32998075-6895-4c1c-b55c-a31373accd6c-serving-cert\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.788294 4764 scope.go:117] "RemoveContainer" containerID="8fab584a70ff2b44452f248fab58eef47e52755b5dcd5be3f11b78385579558c" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.797816 4764 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bz6r5\" (UniqueName: \"kubernetes.io/projected/32998075-6895-4c1c-b55c-a31373accd6c-kube-api-access-bz6r5\") pod \"controller-manager-7bd68d5b4d-2m2pz\" (UID: \"32998075-6895-4c1c-b55c-a31373accd6c\") " pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.799457 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nttjc"] Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.806137 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nttjc"] Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.807856 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvbg\" (UniqueName: \"kubernetes.io/projected/8d271c13-440a-49ce-88c5-665138c975b1-kube-api-access-4wvbg\") pod \"route-controller-manager-f4fd89747-g9gf6\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.812008 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"] Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.814731 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ghtj6"] Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.874220 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.884784 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-client-ca\") pod \"8d271c13-440a-49ce-88c5-665138c975b1\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.884899 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d271c13-440a-49ce-88c5-665138c975b1-serving-cert\") pod \"8d271c13-440a-49ce-88c5-665138c975b1\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.885471 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-config\") pod \"8d271c13-440a-49ce-88c5-665138c975b1\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.885612 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wvbg\" (UniqueName: \"kubernetes.io/projected/8d271c13-440a-49ce-88c5-665138c975b1-kube-api-access-4wvbg\") pod \"8d271c13-440a-49ce-88c5-665138c975b1\" (UID: \"8d271c13-440a-49ce-88c5-665138c975b1\") " Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.885495 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "8d271c13-440a-49ce-88c5-665138c975b1" (UID: "8d271c13-440a-49ce-88c5-665138c975b1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.886170 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-config" (OuterVolumeSpecName: "config") pod "8d271c13-440a-49ce-88c5-665138c975b1" (UID: "8d271c13-440a-49ce-88c5-665138c975b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.888209 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d271c13-440a-49ce-88c5-665138c975b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8d271c13-440a-49ce-88c5-665138c975b1" (UID: "8d271c13-440a-49ce-88c5-665138c975b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.888560 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d271c13-440a-49ce-88c5-665138c975b1-kube-api-access-4wvbg" (OuterVolumeSpecName: "kube-api-access-4wvbg") pod "8d271c13-440a-49ce-88c5-665138c975b1" (UID: "8d271c13-440a-49ce-88c5-665138c975b1"). InnerVolumeSpecName "kube-api-access-4wvbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.987515 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d271c13-440a-49ce-88c5-665138c975b1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.987564 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.987578 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wvbg\" (UniqueName: \"kubernetes.io/projected/8d271c13-440a-49ce-88c5-665138c975b1-kube-api-access-4wvbg\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:30 crc kubenswrapper[4764]: I0127 07:21:30.987588 4764 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d271c13-440a-49ce-88c5-665138c975b1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.072248 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz"] Jan 27 07:21:31 crc kubenswrapper[4764]: W0127 07:21:31.078707 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32998075_6895_4c1c_b55c_a31373accd6c.slice/crio-ce61bdffa81adb5c0c0fd15cf3733218684be66ed048d1f682072bb2598c1404 WatchSource:0}: Error finding container ce61bdffa81adb5c0c0fd15cf3733218684be66ed048d1f682072bb2598c1404: Status 404 returned error can't find the container with id ce61bdffa81adb5c0c0fd15cf3733218684be66ed048d1f682072bb2598c1404 Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.780661 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" event={"ID":"32998075-6895-4c1c-b55c-a31373accd6c","Type":"ContainerStarted","Data":"df4c9b19e251657477ca7a8140704c11536fe64a7a52d005dbcc0e9f32ad932f"} Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.780702 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" event={"ID":"32998075-6895-4c1c-b55c-a31373accd6c","Type":"ContainerStarted","Data":"ce61bdffa81adb5c0c0fd15cf3733218684be66ed048d1f682072bb2598c1404"} Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.780882 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.781911 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.786999 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.800502 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bd68d5b4d-2m2pz" podStartSLOduration=1.800483928 podStartE2EDuration="1.800483928s" podCreationTimestamp="2026-01-27 07:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:21:31.800286263 +0000 UTC m=+304.395908789" watchObservedRunningTime="2026-01-27 07:21:31.800483928 +0000 UTC m=+304.396106454" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.896880 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6"] Jan 27 
07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.908865 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln"] Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.909885 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.912594 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.914623 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.914673 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.914623 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.915037 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.915343 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.916108 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln"] Jan 27 07:21:31 crc kubenswrapper[4764]: I0127 07:21:31.919150 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-g9gf6"] Jan 27 07:21:32 crc kubenswrapper[4764]: 
I0127 07:21:32.000535 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-client-ca\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.000604 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f6090-c437-47e1-85a4-309117e7d37d-serving-cert\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.000871 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-config\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.000921 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cq49\" (UniqueName: \"kubernetes.io/projected/a23f6090-c437-47e1-85a4-309117e7d37d-kube-api-access-8cq49\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.102627 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-client-ca\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.102690 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f6090-c437-47e1-85a4-309117e7d37d-serving-cert\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.102760 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-config\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.102790 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cq49\" (UniqueName: \"kubernetes.io/projected/a23f6090-c437-47e1-85a4-309117e7d37d-kube-api-access-8cq49\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.104763 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-config\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc 
kubenswrapper[4764]: I0127 07:21:32.105666 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-client-ca\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.121843 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f6090-c437-47e1-85a4-309117e7d37d-serving-cert\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.121864 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cq49\" (UniqueName: \"kubernetes.io/projected/a23f6090-c437-47e1-85a4-309117e7d37d-kube-api-access-8cq49\") pod \"route-controller-manager-67fd64d77-447ln\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.226532 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.446209 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d754c80-9bb1-4cbe-8068-edb1bba00f87" path="/var/lib/kubelet/pods/2d754c80-9bb1-4cbe-8068-edb1bba00f87/volumes" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.447266 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d271c13-440a-49ce-88c5-665138c975b1" path="/var/lib/kubelet/pods/8d271c13-440a-49ce-88c5-665138c975b1/volumes" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.447622 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7073743-ec8e-48d4-a853-f1b6e10343e4" path="/var/lib/kubelet/pods/a7073743-ec8e-48d4-a853-f1b6e10343e4/volumes" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.466868 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln"] Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.788504 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" event={"ID":"a23f6090-c437-47e1-85a4-309117e7d37d","Type":"ContainerStarted","Data":"fd8d83a7868640b94c4f37aa22d67cc362ac3b1ed75d389b4629b0e7c1ac5254"} Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.788997 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:32 crc kubenswrapper[4764]: I0127 07:21:32.789018 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" event={"ID":"a23f6090-c437-47e1-85a4-309117e7d37d","Type":"ContainerStarted","Data":"8c72f70f0cf9f31039b23d529bfddebc15eea2188767b928b28d3ef2ab2f26db"} Jan 27 07:21:32 crc 
kubenswrapper[4764]: I0127 07:21:32.808796 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" podStartSLOduration=2.808769053 podStartE2EDuration="2.808769053s" podCreationTimestamp="2026-01-27 07:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:21:32.804791344 +0000 UTC m=+305.400413870" watchObservedRunningTime="2026-01-27 07:21:32.808769053 +0000 UTC m=+305.404391579" Jan 27 07:21:33 crc kubenswrapper[4764]: I0127 07:21:33.241452 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.410674 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tgrxd"] Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.412125 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.414471 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.428450 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgrxd"] Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.473411 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-utilities\") pod \"redhat-marketplace-tgrxd\" (UID: \"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.473537 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-catalog-content\") pod \"redhat-marketplace-tgrxd\" (UID: \"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.473563 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhz49\" (UniqueName: \"kubernetes.io/projected/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-kube-api-access-lhz49\") pod \"redhat-marketplace-tgrxd\" (UID: \"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.574846 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-utilities\") pod \"redhat-marketplace-tgrxd\" (UID: 
\"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.574910 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-catalog-content\") pod \"redhat-marketplace-tgrxd\" (UID: \"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.574931 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhz49\" (UniqueName: \"kubernetes.io/projected/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-kube-api-access-lhz49\") pod \"redhat-marketplace-tgrxd\" (UID: \"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.575408 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-catalog-content\") pod \"redhat-marketplace-tgrxd\" (UID: \"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.575612 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-utilities\") pod \"redhat-marketplace-tgrxd\" (UID: \"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.607595 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhz49\" (UniqueName: \"kubernetes.io/projected/9f9dc94e-9e12-4bcf-8074-d996b8003e3a-kube-api-access-lhz49\") pod \"redhat-marketplace-tgrxd\" (UID: 
\"9f9dc94e-9e12-4bcf-8074-d996b8003e3a\") " pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.611281 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-swd5n"] Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.612376 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.614644 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.622248 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swd5n"] Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.676053 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0516c8e-82e6-457c-97e7-503dbf7fb615-utilities\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.676213 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmn4t\" (UniqueName: \"kubernetes.io/projected/c0516c8e-82e6-457c-97e7-503dbf7fb615-kube-api-access-bmn4t\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.676294 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0516c8e-82e6-457c-97e7-503dbf7fb615-catalog-content\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " 
pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.740484 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.777508 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0516c8e-82e6-457c-97e7-503dbf7fb615-catalog-content\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.778101 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0516c8e-82e6-457c-97e7-503dbf7fb615-utilities\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.778156 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmn4t\" (UniqueName: \"kubernetes.io/projected/c0516c8e-82e6-457c-97e7-503dbf7fb615-kube-api-access-bmn4t\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.778308 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0516c8e-82e6-457c-97e7-503dbf7fb615-catalog-content\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.778719 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c0516c8e-82e6-457c-97e7-503dbf7fb615-utilities\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.797277 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmn4t\" (UniqueName: \"kubernetes.io/projected/c0516c8e-82e6-457c-97e7-503dbf7fb615-kube-api-access-bmn4t\") pod \"redhat-operators-swd5n\" (UID: \"c0516c8e-82e6-457c-97e7-503dbf7fb615\") " pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:43 crc kubenswrapper[4764]: I0127 07:21:43.929859 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.134682 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgrxd"] Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.352651 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swd5n"] Jan 27 07:21:44 crc kubenswrapper[4764]: W0127 07:21:44.379021 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0516c8e_82e6_457c_97e7_503dbf7fb615.slice/crio-3f62b6b1338d36861312f534115f82a78cfef27365e5d7496ffef8bbe3cceaa5 WatchSource:0}: Error finding container 3f62b6b1338d36861312f534115f82a78cfef27365e5d7496ffef8bbe3cceaa5: Status 404 returned error can't find the container with id 3f62b6b1338d36861312f534115f82a78cfef27365e5d7496ffef8bbe3cceaa5 Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.877229 4764 generic.go:334] "Generic (PLEG): container finished" podID="9f9dc94e-9e12-4bcf-8074-d996b8003e3a" containerID="dec350fdcc5ff92edc0dde6ab0502e2b7e3fdef1b618a6c51d42a230a99dbb5d" exitCode=0 Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.877330 4764 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgrxd" event={"ID":"9f9dc94e-9e12-4bcf-8074-d996b8003e3a","Type":"ContainerDied","Data":"dec350fdcc5ff92edc0dde6ab0502e2b7e3fdef1b618a6c51d42a230a99dbb5d"} Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.877409 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgrxd" event={"ID":"9f9dc94e-9e12-4bcf-8074-d996b8003e3a","Type":"ContainerStarted","Data":"956d3388fbdb52f1228a2fc6c6895ec83a4532e2f07a79353b09fc3449836fdc"} Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.879378 4764 generic.go:334] "Generic (PLEG): container finished" podID="c0516c8e-82e6-457c-97e7-503dbf7fb615" containerID="ba03b5ce628824a9f0d222e11c6b66d2cdeb50a1f004cffa3081be3822ccedf5" exitCode=0 Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.879495 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swd5n" event={"ID":"c0516c8e-82e6-457c-97e7-503dbf7fb615","Type":"ContainerDied","Data":"ba03b5ce628824a9f0d222e11c6b66d2cdeb50a1f004cffa3081be3822ccedf5"} Jan 27 07:21:44 crc kubenswrapper[4764]: I0127 07:21:44.879536 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swd5n" event={"ID":"c0516c8e-82e6-457c-97e7-503dbf7fb615","Type":"ContainerStarted","Data":"3f62b6b1338d36861312f534115f82a78cfef27365e5d7496ffef8bbe3cceaa5"} Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.213989 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l2lsf"] Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.215812 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.218600 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.223737 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2lsf"] Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.319495 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2219dd21-2e26-4ed0-b937-d28596919965-utilities\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.319549 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2219dd21-2e26-4ed0-b937-d28596919965-catalog-content\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.319940 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmh74\" (UniqueName: \"kubernetes.io/projected/2219dd21-2e26-4ed0-b937-d28596919965-kube-api-access-hmh74\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.411208 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nj2jh"] Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.413819 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.421814 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2219dd21-2e26-4ed0-b937-d28596919965-utilities\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.422089 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2219dd21-2e26-4ed0-b937-d28596919965-catalog-content\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.422213 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmh74\" (UniqueName: \"kubernetes.io/projected/2219dd21-2e26-4ed0-b937-d28596919965-kube-api-access-hmh74\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.424051 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2219dd21-2e26-4ed0-b937-d28596919965-utilities\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.424399 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.430282 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/2219dd21-2e26-4ed0-b937-d28596919965-catalog-content\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.448929 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nj2jh"] Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.478084 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmh74\" (UniqueName: \"kubernetes.io/projected/2219dd21-2e26-4ed0-b937-d28596919965-kube-api-access-hmh74\") pod \"certified-operators-l2lsf\" (UID: \"2219dd21-2e26-4ed0-b937-d28596919965\") " pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.524072 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-catalog-content\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.524159 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-utilities\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.524218 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxg66\" (UniqueName: \"kubernetes.io/projected/f115191a-acf3-4ca6-a263-f5e155e355bb-kube-api-access-qxg66\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") 
" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.532324 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.627095 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxg66\" (UniqueName: \"kubernetes.io/projected/f115191a-acf3-4ca6-a263-f5e155e355bb-kube-api-access-qxg66\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.627636 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-catalog-content\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.628506 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-catalog-content\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.632049 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-utilities\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.632528 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-utilities\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.653199 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxg66\" (UniqueName: \"kubernetes.io/projected/f115191a-acf3-4ca6-a263-f5e155e355bb-kube-api-access-qxg66\") pod \"community-operators-nj2jh\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.740832 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.893760 4764 generic.go:334] "Generic (PLEG): container finished" podID="c0516c8e-82e6-457c-97e7-503dbf7fb615" containerID="5146dfab80a2d84a58c96d77f3595b473ac6fac1e016eb209bcbd4eddd7dde5d" exitCode=0 Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.893854 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swd5n" event={"ID":"c0516c8e-82e6-457c-97e7-503dbf7fb615","Type":"ContainerDied","Data":"5146dfab80a2d84a58c96d77f3595b473ac6fac1e016eb209bcbd4eddd7dde5d"} Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.899659 4764 generic.go:334] "Generic (PLEG): container finished" podID="9f9dc94e-9e12-4bcf-8074-d996b8003e3a" containerID="6b382282e4271a0570d6dd12318d5871f5fa93a859a9983ee1490a99f97bc5f2" exitCode=0 Jan 27 07:21:46 crc kubenswrapper[4764]: I0127 07:21:46.899691 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgrxd" event={"ID":"9f9dc94e-9e12-4bcf-8074-d996b8003e3a","Type":"ContainerDied","Data":"6b382282e4271a0570d6dd12318d5871f5fa93a859a9983ee1490a99f97bc5f2"} Jan 27 07:21:46 crc kubenswrapper[4764]: 
I0127 07:21:46.947413 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2lsf"] Jan 27 07:21:46 crc kubenswrapper[4764]: W0127 07:21:46.959507 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2219dd21_2e26_4ed0_b937_d28596919965.slice/crio-6ee0a1dc166f0124cd1eebde80ac41da12c42eaffd2433a864e4ca687f188ba3 WatchSource:0}: Error finding container 6ee0a1dc166f0124cd1eebde80ac41da12c42eaffd2433a864e4ca687f188ba3: Status 404 returned error can't find the container with id 6ee0a1dc166f0124cd1eebde80ac41da12c42eaffd2433a864e4ca687f188ba3 Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.136534 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nj2jh"] Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.907336 4764 generic.go:334] "Generic (PLEG): container finished" podID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerID="039c3619a33dfb800bd42cf6ea5026013e17c37771a019d991ee57dff9c4e900" exitCode=0 Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.907419 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj2jh" event={"ID":"f115191a-acf3-4ca6-a263-f5e155e355bb","Type":"ContainerDied","Data":"039c3619a33dfb800bd42cf6ea5026013e17c37771a019d991ee57dff9c4e900"} Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.907480 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj2jh" event={"ID":"f115191a-acf3-4ca6-a263-f5e155e355bb","Type":"ContainerStarted","Data":"c80edd556adfc17f11ff65eb3d54b60e54082e987f1d3ae40990adedae894cdc"} Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.911760 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swd5n" 
event={"ID":"c0516c8e-82e6-457c-97e7-503dbf7fb615","Type":"ContainerStarted","Data":"2ab70b49c9d852f262a17699d3e0cb6d8645d417ca68bfd7d39e7dcc81d5c54a"} Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.915391 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgrxd" event={"ID":"9f9dc94e-9e12-4bcf-8074-d996b8003e3a","Type":"ContainerStarted","Data":"96e282316a8f31abfd4a0fca65b1989753dbbfde703231928bc27569a70b58fa"} Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.917634 4764 generic.go:334] "Generic (PLEG): container finished" podID="2219dd21-2e26-4ed0-b937-d28596919965" containerID="935da0b3f358cc6416c6ebfd42e87c9ed9665c77719e647bd6616189054f0d52" exitCode=0 Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.917681 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2lsf" event={"ID":"2219dd21-2e26-4ed0-b937-d28596919965","Type":"ContainerDied","Data":"935da0b3f358cc6416c6ebfd42e87c9ed9665c77719e647bd6616189054f0d52"} Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.917704 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2lsf" event={"ID":"2219dd21-2e26-4ed0-b937-d28596919965","Type":"ContainerStarted","Data":"6ee0a1dc166f0124cd1eebde80ac41da12c42eaffd2433a864e4ca687f188ba3"} Jan 27 07:21:47 crc kubenswrapper[4764]: I0127 07:21:47.987185 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-swd5n" podStartSLOduration=2.491625433 podStartE2EDuration="4.987156129s" podCreationTimestamp="2026-01-27 07:21:43 +0000 UTC" firstStartedPulling="2026-01-27 07:21:44.8831723 +0000 UTC m=+317.478794826" lastFinishedPulling="2026-01-27 07:21:47.378702986 +0000 UTC m=+319.974325522" observedRunningTime="2026-01-27 07:21:47.986844641 +0000 UTC m=+320.582467157" watchObservedRunningTime="2026-01-27 07:21:47.987156129 +0000 UTC 
m=+320.582778655" Jan 27 07:21:48 crc kubenswrapper[4764]: I0127 07:21:48.005133 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tgrxd" podStartSLOduration=2.518703185 podStartE2EDuration="5.005109171s" podCreationTimestamp="2026-01-27 07:21:43 +0000 UTC" firstStartedPulling="2026-01-27 07:21:44.88319696 +0000 UTC m=+317.478819526" lastFinishedPulling="2026-01-27 07:21:47.369602946 +0000 UTC m=+319.965225512" observedRunningTime="2026-01-27 07:21:48.002628563 +0000 UTC m=+320.598251089" watchObservedRunningTime="2026-01-27 07:21:48.005109171 +0000 UTC m=+320.600731697" Jan 27 07:21:48 crc kubenswrapper[4764]: I0127 07:21:48.959265 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lbjwb"] Jan 27 07:21:48 crc kubenswrapper[4764]: I0127 07:21:48.960458 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:48 crc kubenswrapper[4764]: I0127 07:21:48.992920 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lbjwb"] Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078038 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-trusted-ca\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078098 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znr24\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-kube-api-access-znr24\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: 
\"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078125 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078148 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-registry-certificates\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078180 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-registry-tls\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078413 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-bound-sa-token\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078518 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.078542 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.118453 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180323 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-bound-sa-token\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180381 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180409 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-trusted-ca\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180445 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znr24\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-kube-api-access-znr24\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180468 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180493 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-registry-certificates\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180520 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-registry-tls\") pod 
\"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.180996 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.182470 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-trusted-ca\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.182663 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-registry-certificates\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.188600 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.197973 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-registry-tls\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.210483 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-bound-sa-token\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.224403 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znr24\" (UniqueName: \"kubernetes.io/projected/9ef1649b-4ec6-4648-ae9b-5ddd51defc02-kube-api-access-znr24\") pod \"image-registry-66df7c8f76-lbjwb\" (UID: \"9ef1649b-4ec6-4648-ae9b-5ddd51defc02\") " pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.285999 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.795348 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lbjwb"] Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.929775 4764 generic.go:334] "Generic (PLEG): container finished" podID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerID="0127bbda7c421f24fab2a703804cec087a3353913a01165d279dc8682de983e9" exitCode=0 Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.929882 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj2jh" event={"ID":"f115191a-acf3-4ca6-a263-f5e155e355bb","Type":"ContainerDied","Data":"0127bbda7c421f24fab2a703804cec087a3353913a01165d279dc8682de983e9"} Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.933599 4764 generic.go:334] "Generic (PLEG): container finished" podID="2219dd21-2e26-4ed0-b937-d28596919965" containerID="a627d4c2d673bb2b426d2d5fcc29eed08a32d00769ad66f8ca33fb8700521dfc" exitCode=0 Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.933668 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2lsf" event={"ID":"2219dd21-2e26-4ed0-b937-d28596919965","Type":"ContainerDied","Data":"a627d4c2d673bb2b426d2d5fcc29eed08a32d00769ad66f8ca33fb8700521dfc"} Jan 27 07:21:49 crc kubenswrapper[4764]: I0127 07:21:49.935358 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" event={"ID":"9ef1649b-4ec6-4648-ae9b-5ddd51defc02","Type":"ContainerStarted","Data":"98ed3d6a1cc890dcff3f0e37bef0aa62d9ff6edb90d1959ac625e1357d740014"} Jan 27 07:21:50 crc kubenswrapper[4764]: I0127 07:21:50.945855 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2lsf" 
event={"ID":"2219dd21-2e26-4ed0-b937-d28596919965","Type":"ContainerStarted","Data":"fce43a78ce04713e08f45d986e8a668a4ae35f8f4bc33f9d128791bdf6ce5dc9"} Jan 27 07:21:50 crc kubenswrapper[4764]: I0127 07:21:50.948618 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" event={"ID":"9ef1649b-4ec6-4648-ae9b-5ddd51defc02","Type":"ContainerStarted","Data":"f23181db163c328d1e69104286e800a482a0580a07cf419e69dfcf7a3f6496af"} Jan 27 07:21:50 crc kubenswrapper[4764]: I0127 07:21:50.948664 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:21:50 crc kubenswrapper[4764]: I0127 07:21:50.952123 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj2jh" event={"ID":"f115191a-acf3-4ca6-a263-f5e155e355bb","Type":"ContainerStarted","Data":"3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb"} Jan 27 07:21:50 crc kubenswrapper[4764]: I0127 07:21:50.970248 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l2lsf" podStartSLOduration=2.436443854 podStartE2EDuration="4.970226058s" podCreationTimestamp="2026-01-27 07:21:46 +0000 UTC" firstStartedPulling="2026-01-27 07:21:47.919232159 +0000 UTC m=+320.514854685" lastFinishedPulling="2026-01-27 07:21:50.453014363 +0000 UTC m=+323.048636889" observedRunningTime="2026-01-27 07:21:50.968634444 +0000 UTC m=+323.564256990" watchObservedRunningTime="2026-01-27 07:21:50.970226058 +0000 UTC m=+323.565848594" Jan 27 07:21:50 crc kubenswrapper[4764]: I0127 07:21:50.995953 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nj2jh" podStartSLOduration=2.576411388 podStartE2EDuration="4.995929452s" podCreationTimestamp="2026-01-27 07:21:46 +0000 UTC" firstStartedPulling="2026-01-27 07:21:47.909836142 
+0000 UTC m=+320.505458668" lastFinishedPulling="2026-01-27 07:21:50.329354206 +0000 UTC m=+322.924976732" observedRunningTime="2026-01-27 07:21:50.994500963 +0000 UTC m=+323.590123499" watchObservedRunningTime="2026-01-27 07:21:50.995929452 +0000 UTC m=+323.591551968" Jan 27 07:21:51 crc kubenswrapper[4764]: I0127 07:21:51.020998 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" podStartSLOduration=3.020976818 podStartE2EDuration="3.020976818s" podCreationTimestamp="2026-01-27 07:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:21:51.017613406 +0000 UTC m=+323.613235932" watchObservedRunningTime="2026-01-27 07:21:51.020976818 +0000 UTC m=+323.616599344" Jan 27 07:21:53 crc kubenswrapper[4764]: I0127 07:21:53.741324 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:53 crc kubenswrapper[4764]: I0127 07:21:53.741995 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:53 crc kubenswrapper[4764]: I0127 07:21:53.803248 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:53 crc kubenswrapper[4764]: I0127 07:21:53.931139 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:53 crc kubenswrapper[4764]: I0127 07:21:53.931611 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:53 crc kubenswrapper[4764]: I0127 07:21:53.980840 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:54 
crc kubenswrapper[4764]: I0127 07:21:54.015356 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tgrxd" Jan 27 07:21:55 crc kubenswrapper[4764]: I0127 07:21:55.037238 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-swd5n" Jan 27 07:21:56 crc kubenswrapper[4764]: I0127 07:21:56.532741 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:56 crc kubenswrapper[4764]: I0127 07:21:56.532907 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:56 crc kubenswrapper[4764]: I0127 07:21:56.577634 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:56 crc kubenswrapper[4764]: I0127 07:21:56.741785 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:56 crc kubenswrapper[4764]: I0127 07:21:56.741901 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:56 crc kubenswrapper[4764]: I0127 07:21:56.840347 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:21:57 crc kubenswrapper[4764]: I0127 07:21:57.039251 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l2lsf" Jan 27 07:21:57 crc kubenswrapper[4764]: I0127 07:21:57.039691 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:22:09 crc kubenswrapper[4764]: I0127 07:22:09.292672 4764 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-lbjwb" Jan 27 07:22:09 crc kubenswrapper[4764]: I0127 07:22:09.362146 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwsfp"] Jan 27 07:22:09 crc kubenswrapper[4764]: I0127 07:22:09.458421 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln"] Jan 27 07:22:09 crc kubenswrapper[4764]: I0127 07:22:09.458791 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" podUID="a23f6090-c437-47e1-85a4-309117e7d37d" containerName="route-controller-manager" containerID="cri-o://fd8d83a7868640b94c4f37aa22d67cc362ac3b1ed75d389b4629b0e7c1ac5254" gracePeriod=30 Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.085814 4764 generic.go:334] "Generic (PLEG): container finished" podID="a23f6090-c437-47e1-85a4-309117e7d37d" containerID="fd8d83a7868640b94c4f37aa22d67cc362ac3b1ed75d389b4629b0e7c1ac5254" exitCode=0 Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.085873 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" event={"ID":"a23f6090-c437-47e1-85a4-309117e7d37d","Type":"ContainerDied","Data":"fd8d83a7868640b94c4f37aa22d67cc362ac3b1ed75d389b4629b0e7c1ac5254"} Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.372555 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.516811 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cq49\" (UniqueName: \"kubernetes.io/projected/a23f6090-c437-47e1-85a4-309117e7d37d-kube-api-access-8cq49\") pod \"a23f6090-c437-47e1-85a4-309117e7d37d\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.516920 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f6090-c437-47e1-85a4-309117e7d37d-serving-cert\") pod \"a23f6090-c437-47e1-85a4-309117e7d37d\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.516963 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-config\") pod \"a23f6090-c437-47e1-85a4-309117e7d37d\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.517007 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-client-ca\") pod \"a23f6090-c437-47e1-85a4-309117e7d37d\" (UID: \"a23f6090-c437-47e1-85a4-309117e7d37d\") " Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.518241 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-client-ca" (OuterVolumeSpecName: "client-ca") pod "a23f6090-c437-47e1-85a4-309117e7d37d" (UID: "a23f6090-c437-47e1-85a4-309117e7d37d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.518231 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-config" (OuterVolumeSpecName: "config") pod "a23f6090-c437-47e1-85a4-309117e7d37d" (UID: "a23f6090-c437-47e1-85a4-309117e7d37d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.521914 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23f6090-c437-47e1-85a4-309117e7d37d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a23f6090-c437-47e1-85a4-309117e7d37d" (UID: "a23f6090-c437-47e1-85a4-309117e7d37d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.523679 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23f6090-c437-47e1-85a4-309117e7d37d-kube-api-access-8cq49" (OuterVolumeSpecName: "kube-api-access-8cq49") pod "a23f6090-c437-47e1-85a4-309117e7d37d" (UID: "a23f6090-c437-47e1-85a4-309117e7d37d"). InnerVolumeSpecName "kube-api-access-8cq49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.618577 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cq49\" (UniqueName: \"kubernetes.io/projected/a23f6090-c437-47e1-85a4-309117e7d37d-kube-api-access-8cq49\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.618634 4764 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f6090-c437-47e1-85a4-309117e7d37d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.618652 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.618665 4764 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a23f6090-c437-47e1-85a4-309117e7d37d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.817968 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz"] Jan 27 07:22:10 crc kubenswrapper[4764]: E0127 07:22:10.818640 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23f6090-c437-47e1-85a4-309117e7d37d" containerName="route-controller-manager" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.818755 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23f6090-c437-47e1-85a4-309117e7d37d" containerName="route-controller-manager" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.818963 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23f6090-c437-47e1-85a4-309117e7d37d" containerName="route-controller-manager" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.819701 
4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.831515 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz"] Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.925325 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aea830e-1908-4331-a328-75590dc62d5b-serving-cert\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.925475 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aea830e-1908-4331-a328-75590dc62d5b-config\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.925585 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aea830e-1908-4331-a328-75590dc62d5b-client-ca\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:10 crc kubenswrapper[4764]: I0127 07:22:10.925629 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzsw\" (UniqueName: \"kubernetes.io/projected/1aea830e-1908-4331-a328-75590dc62d5b-kube-api-access-cbzsw\") pod 
\"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.027098 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aea830e-1908-4331-a328-75590dc62d5b-client-ca\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.027178 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzsw\" (UniqueName: \"kubernetes.io/projected/1aea830e-1908-4331-a328-75590dc62d5b-kube-api-access-cbzsw\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.027236 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aea830e-1908-4331-a328-75590dc62d5b-serving-cert\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.027267 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aea830e-1908-4331-a328-75590dc62d5b-config\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.028303 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aea830e-1908-4331-a328-75590dc62d5b-client-ca\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.028485 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aea830e-1908-4331-a328-75590dc62d5b-config\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.033661 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aea830e-1908-4331-a328-75590dc62d5b-serving-cert\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.050150 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzsw\" (UniqueName: \"kubernetes.io/projected/1aea830e-1908-4331-a328-75590dc62d5b-kube-api-access-cbzsw\") pod \"route-controller-manager-f4fd89747-ssdgz\" (UID: \"1aea830e-1908-4331-a328-75590dc62d5b\") " pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.093975 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" event={"ID":"a23f6090-c437-47e1-85a4-309117e7d37d","Type":"ContainerDied","Data":"8c72f70f0cf9f31039b23d529bfddebc15eea2188767b928b28d3ef2ab2f26db"} Jan 27 07:22:11 
crc kubenswrapper[4764]: I0127 07:22:11.094041 4764 scope.go:117] "RemoveContainer" containerID="fd8d83a7868640b94c4f37aa22d67cc362ac3b1ed75d389b4629b0e7c1ac5254" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.094204 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.130027 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln"] Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.135453 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.137649 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67fd64d77-447ln"] Jan 27 07:22:11 crc kubenswrapper[4764]: I0127 07:22:11.561248 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz"] Jan 27 07:22:12 crc kubenswrapper[4764]: I0127 07:22:12.100122 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" event={"ID":"1aea830e-1908-4331-a328-75590dc62d5b","Type":"ContainerStarted","Data":"94516833a5d37c61a5398a37a1f0d37fd28b18b4644c9a7985cd5c54435652a1"} Jan 27 07:22:12 crc kubenswrapper[4764]: I0127 07:22:12.100388 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" event={"ID":"1aea830e-1908-4331-a328-75590dc62d5b","Type":"ContainerStarted","Data":"fdeeb6f698bb8cdc7a9a1707ff6cc862f03c14daa93b19b06940988ce77b2876"} Jan 27 07:22:12 crc kubenswrapper[4764]: I0127 07:22:12.100412 4764 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:12 crc kubenswrapper[4764]: I0127 07:22:12.107305 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" Jan 27 07:22:12 crc kubenswrapper[4764]: I0127 07:22:12.118284 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f4fd89747-ssdgz" podStartSLOduration=3.118260776 podStartE2EDuration="3.118260776s" podCreationTimestamp="2026-01-27 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:22:12.117372679 +0000 UTC m=+344.712995235" watchObservedRunningTime="2026-01-27 07:22:12.118260776 +0000 UTC m=+344.713883312" Jan 27 07:22:12 crc kubenswrapper[4764]: I0127 07:22:12.446655 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23f6090-c437-47e1-85a4-309117e7d37d" path="/var/lib/kubelet/pods/a23f6090-c437-47e1-85a4-309117e7d37d/volumes" Jan 27 07:22:23 crc kubenswrapper[4764]: I0127 07:22:23.762395 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:22:23 crc kubenswrapper[4764]: I0127 07:22:23.763186 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:22:34 crc 
kubenswrapper[4764]: I0127 07:22:34.402010 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" podUID="45ab3850-93b1-42f7-9a7d-243951b7a0d4" containerName="registry" containerID="cri-o://8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe" gracePeriod=30 Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.835343 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.861403 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6n7l\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-kube-api-access-v6n7l\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.861793 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-certificates\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.861913 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-tls\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.862160 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: 
\"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.862273 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45ab3850-93b1-42f7-9a7d-243951b7a0d4-installation-pull-secrets\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.862369 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45ab3850-93b1-42f7-9a7d-243951b7a0d4-ca-trust-extracted\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.862472 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-bound-sa-token\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.862566 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-trusted-ca\") pod \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\" (UID: \"45ab3850-93b1-42f7-9a7d-243951b7a0d4\") " Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.863732 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.865541 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.873725 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-kube-api-access-v6n7l" (OuterVolumeSpecName: "kube-api-access-v6n7l") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "kube-api-access-v6n7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.878415 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.879712 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.879953 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ab3850-93b1-42f7-9a7d-243951b7a0d4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.890510 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.901911 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ab3850-93b1-42f7-9a7d-243951b7a0d4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "45ab3850-93b1-42f7-9a7d-243951b7a0d4" (UID: "45ab3850-93b1-42f7-9a7d-243951b7a0d4"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.964855 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.964923 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6n7l\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-kube-api-access-v6n7l\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.964946 4764 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.964964 4764 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.964984 4764 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45ab3850-93b1-42f7-9a7d-243951b7a0d4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.965001 4764 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45ab3850-93b1-42f7-9a7d-243951b7a0d4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:34 crc kubenswrapper[4764]: I0127 07:22:34.965018 4764 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45ab3850-93b1-42f7-9a7d-243951b7a0d4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:22:35 crc 
kubenswrapper[4764]: I0127 07:22:35.259227 4764 generic.go:334] "Generic (PLEG): container finished" podID="45ab3850-93b1-42f7-9a7d-243951b7a0d4" containerID="8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe" exitCode=0 Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.259468 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" event={"ID":"45ab3850-93b1-42f7-9a7d-243951b7a0d4","Type":"ContainerDied","Data":"8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe"} Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.259635 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.259647 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwsfp" event={"ID":"45ab3850-93b1-42f7-9a7d-243951b7a0d4","Type":"ContainerDied","Data":"bfaab6d41866cb949183eb120696fb759edccedb4da3a9f8e26bf17355aa4aaf"} Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.259679 4764 scope.go:117] "RemoveContainer" containerID="8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe" Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.295776 4764 scope.go:117] "RemoveContainer" containerID="8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe" Jan 27 07:22:35 crc kubenswrapper[4764]: E0127 07:22:35.296539 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe\": container with ID starting with 8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe not found: ID does not exist" containerID="8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe" Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.296622 4764 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe"} err="failed to get container status \"8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe\": rpc error: code = NotFound desc = could not find container \"8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe\": container with ID starting with 8ca7ec193a0f26563fb870272a9b86d2e889919ea5ae08782faadf6e66248bbe not found: ID does not exist" Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.311157 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwsfp"] Jan 27 07:22:35 crc kubenswrapper[4764]: I0127 07:22:35.318374 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwsfp"] Jan 27 07:22:36 crc kubenswrapper[4764]: I0127 07:22:36.446727 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ab3850-93b1-42f7-9a7d-243951b7a0d4" path="/var/lib/kubelet/pods/45ab3850-93b1-42f7-9a7d-243951b7a0d4/volumes" Jan 27 07:22:53 crc kubenswrapper[4764]: I0127 07:22:53.762728 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:22:53 crc kubenswrapper[4764]: I0127 07:22:53.763259 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:23:23 crc kubenswrapper[4764]: I0127 07:23:23.762665 4764 patch_prober.go:28] interesting 
pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:23:23 crc kubenswrapper[4764]: I0127 07:23:23.763565 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:23:23 crc kubenswrapper[4764]: I0127 07:23:23.763622 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:23:23 crc kubenswrapper[4764]: I0127 07:23:23.764295 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c2d8be3e5ce23d3b5629115703f8692a01449add9bb3fca83dcedde2638f163"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:23:23 crc kubenswrapper[4764]: I0127 07:23:23.764355 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://1c2d8be3e5ce23d3b5629115703f8692a01449add9bb3fca83dcedde2638f163" gracePeriod=600 Jan 27 07:23:24 crc kubenswrapper[4764]: I0127 07:23:24.592748 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="1c2d8be3e5ce23d3b5629115703f8692a01449add9bb3fca83dcedde2638f163" exitCode=0 Jan 27 07:23:24 crc kubenswrapper[4764]: I0127 07:23:24.592851 
4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"1c2d8be3e5ce23d3b5629115703f8692a01449add9bb3fca83dcedde2638f163"} Jan 27 07:23:24 crc kubenswrapper[4764]: I0127 07:23:24.593238 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"3177bb78cb6789559338ca16c2f36759a9fd88a577260ff5fe5f7b34c66220a8"} Jan 27 07:23:24 crc kubenswrapper[4764]: I0127 07:23:24.593269 4764 scope.go:117] "RemoveContainer" containerID="aa317030ae3b5517f30b8ee9d324abd7c9d9044791c34ae9f19de3e6e0be2c13" Jan 27 07:25:28 crc kubenswrapper[4764]: I0127 07:25:28.658990 4764 scope.go:117] "RemoveContainer" containerID="b106d7da0f6fe1591ba60d291a38930091d2e85d0943d16e991defb194dff610" Jan 27 07:25:53 crc kubenswrapper[4764]: I0127 07:25:53.763139 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:25:53 crc kubenswrapper[4764]: I0127 07:25:53.764757 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:26:23 crc kubenswrapper[4764]: I0127 07:26:23.763250 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:26:23 crc kubenswrapper[4764]: I0127 07:26:23.765567 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:26:28 crc kubenswrapper[4764]: I0127 07:26:28.697374 4764 scope.go:117] "RemoveContainer" containerID="ec2a85f7043799abcc4a27a511949533bd5e6e938672d50cea1bb8ccaca06c42" Jan 27 07:26:28 crc kubenswrapper[4764]: I0127 07:26:28.737046 4764 scope.go:117] "RemoveContainer" containerID="d89860d13ab662dcc8d7989c236d905b597c28bd0d41c6a49b2debb659552f1f" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.838973 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5np2p"] Jan 27 07:26:48 crc kubenswrapper[4764]: E0127 07:26:48.839853 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ab3850-93b1-42f7-9a7d-243951b7a0d4" containerName="registry" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.839872 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ab3850-93b1-42f7-9a7d-243951b7a0d4" containerName="registry" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.840011 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ab3850-93b1-42f7-9a7d-243951b7a0d4" containerName="registry" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.842143 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.845318 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.845499 4764 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rstv5" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.845634 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.854880 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2j2dp"] Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.855606 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2j2dp" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.857563 4764 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hc4ct" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.864745 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-554kw"] Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.865699 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.869997 4764 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xff8j" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.874696 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5np2p"] Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.879917 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2j2dp"] Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.908061 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-554kw"] Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.997237 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvw5l\" (UniqueName: \"kubernetes.io/projected/5da1d510-00c8-417b-9e20-8d85290affac-kube-api-access-lvw5l\") pod \"cert-manager-858654f9db-2j2dp\" (UID: \"5da1d510-00c8-417b-9e20-8d85290affac\") " pod="cert-manager/cert-manager-858654f9db-2j2dp" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.997301 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm2vk\" (UniqueName: \"kubernetes.io/projected/e2e23dd2-6c48-4820-9c25-530e99756477-kube-api-access-gm2vk\") pod \"cert-manager-cainjector-cf98fcc89-5np2p\" (UID: \"e2e23dd2-6c48-4820-9c25-530e99756477\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" Jan 27 07:26:48 crc kubenswrapper[4764]: I0127 07:26:48.997345 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tgnd\" (UniqueName: \"kubernetes.io/projected/928f9ba5-2664-4c52-8c0e-786eedf18952-kube-api-access-7tgnd\") pod \"cert-manager-webhook-687f57d79b-554kw\" 
(UID: \"928f9ba5-2664-4c52-8c0e-786eedf18952\") " pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.099063 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvw5l\" (UniqueName: \"kubernetes.io/projected/5da1d510-00c8-417b-9e20-8d85290affac-kube-api-access-lvw5l\") pod \"cert-manager-858654f9db-2j2dp\" (UID: \"5da1d510-00c8-417b-9e20-8d85290affac\") " pod="cert-manager/cert-manager-858654f9db-2j2dp" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.099122 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm2vk\" (UniqueName: \"kubernetes.io/projected/e2e23dd2-6c48-4820-9c25-530e99756477-kube-api-access-gm2vk\") pod \"cert-manager-cainjector-cf98fcc89-5np2p\" (UID: \"e2e23dd2-6c48-4820-9c25-530e99756477\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.099156 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tgnd\" (UniqueName: \"kubernetes.io/projected/928f9ba5-2664-4c52-8c0e-786eedf18952-kube-api-access-7tgnd\") pod \"cert-manager-webhook-687f57d79b-554kw\" (UID: \"928f9ba5-2664-4c52-8c0e-786eedf18952\") " pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.123649 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tgnd\" (UniqueName: \"kubernetes.io/projected/928f9ba5-2664-4c52-8c0e-786eedf18952-kube-api-access-7tgnd\") pod \"cert-manager-webhook-687f57d79b-554kw\" (UID: \"928f9ba5-2664-4c52-8c0e-786eedf18952\") " pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.123649 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm2vk\" (UniqueName: 
\"kubernetes.io/projected/e2e23dd2-6c48-4820-9c25-530e99756477-kube-api-access-gm2vk\") pod \"cert-manager-cainjector-cf98fcc89-5np2p\" (UID: \"e2e23dd2-6c48-4820-9c25-530e99756477\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.129144 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvw5l\" (UniqueName: \"kubernetes.io/projected/5da1d510-00c8-417b-9e20-8d85290affac-kube-api-access-lvw5l\") pod \"cert-manager-858654f9db-2j2dp\" (UID: \"5da1d510-00c8-417b-9e20-8d85290affac\") " pod="cert-manager/cert-manager-858654f9db-2j2dp" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.167330 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.180306 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2j2dp" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.189051 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.457390 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2j2dp"] Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.461431 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.708351 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-554kw"] Jan 27 07:26:49 crc kubenswrapper[4764]: W0127 07:26:49.714816 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e23dd2_6c48_4820_9c25_530e99756477.slice/crio-2da0780262fbf8f207298d2370a0fd4aaa6964e87a577f60931b1dfef8766350 WatchSource:0}: Error finding container 2da0780262fbf8f207298d2370a0fd4aaa6964e87a577f60931b1dfef8766350: Status 404 returned error can't find the container with id 2da0780262fbf8f207298d2370a0fd4aaa6964e87a577f60931b1dfef8766350 Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.715510 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5np2p"] Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.937616 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" event={"ID":"928f9ba5-2664-4c52-8c0e-786eedf18952","Type":"ContainerStarted","Data":"8cbd33a21b19ad67f0d41360cea833bc85e11e2bf492f7dfa7ebaa96700b48d4"} Jan 27 07:26:49 crc kubenswrapper[4764]: I0127 07:26:49.940476 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2j2dp" event={"ID":"5da1d510-00c8-417b-9e20-8d85290affac","Type":"ContainerStarted","Data":"807e21243b006997d136c2001f52c3ac1684238a12a99fbc62324eab691495b4"} Jan 27 07:26:49 crc kubenswrapper[4764]: 
I0127 07:26:49.941982 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" event={"ID":"e2e23dd2-6c48-4820-9c25-530e99756477","Type":"ContainerStarted","Data":"2da0780262fbf8f207298d2370a0fd4aaa6964e87a577f60931b1dfef8766350"} Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.763562 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.764853 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.764946 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.768360 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3177bb78cb6789559338ca16c2f36759a9fd88a577260ff5fe5f7b34c66220a8"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.768546 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" 
containerID="cri-o://3177bb78cb6789559338ca16c2f36759a9fd88a577260ff5fe5f7b34c66220a8" gracePeriod=600 Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.979566 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="3177bb78cb6789559338ca16c2f36759a9fd88a577260ff5fe5f7b34c66220a8" exitCode=0 Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.979639 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"3177bb78cb6789559338ca16c2f36759a9fd88a577260ff5fe5f7b34c66220a8"} Jan 27 07:26:53 crc kubenswrapper[4764]: I0127 07:26:53.979694 4764 scope.go:117] "RemoveContainer" containerID="1c2d8be3e5ce23d3b5629115703f8692a01449add9bb3fca83dcedde2638f163" Jan 27 07:26:54 crc kubenswrapper[4764]: I0127 07:26:54.987560 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" event={"ID":"e2e23dd2-6c48-4820-9c25-530e99756477","Type":"ContainerStarted","Data":"71191e35a17fff945cc64cddbe01de3db92e658bf62c9cfa1a6117328ec57703"} Jan 27 07:26:54 crc kubenswrapper[4764]: I0127 07:26:54.989004 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" event={"ID":"928f9ba5-2664-4c52-8c0e-786eedf18952","Type":"ContainerStarted","Data":"7161ae19a857e2001e8219ddfb734b7346146be6237f50a9c609ae42a07d8577"} Jan 27 07:26:54 crc kubenswrapper[4764]: I0127 07:26:54.989165 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" Jan 27 07:26:54 crc kubenswrapper[4764]: I0127 07:26:54.992014 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2j2dp" 
event={"ID":"5da1d510-00c8-417b-9e20-8d85290affac","Type":"ContainerStarted","Data":"13f581d3f6c4f7a458a5eb0c8f076ba1accfcf98ac3138362196fde09081429b"} Jan 27 07:26:54 crc kubenswrapper[4764]: I0127 07:26:54.995297 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"99095ad6a6beeaefac730e02f9f2d74bfc284be02a2e809f2e291e6bcbaaa57e"} Jan 27 07:26:55 crc kubenswrapper[4764]: I0127 07:26:55.010140 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5np2p" podStartSLOduration=2.696802025 podStartE2EDuration="7.010118391s" podCreationTimestamp="2026-01-27 07:26:48 +0000 UTC" firstStartedPulling="2026-01-27 07:26:49.717777194 +0000 UTC m=+622.313399730" lastFinishedPulling="2026-01-27 07:26:54.03109357 +0000 UTC m=+626.626716096" observedRunningTime="2026-01-27 07:26:55.009178746 +0000 UTC m=+627.604801272" watchObservedRunningTime="2026-01-27 07:26:55.010118391 +0000 UTC m=+627.605740917" Jan 27 07:26:55 crc kubenswrapper[4764]: I0127 07:26:55.036745 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2j2dp" podStartSLOduration=2.494118935 podStartE2EDuration="7.036717367s" podCreationTimestamp="2026-01-27 07:26:48 +0000 UTC" firstStartedPulling="2026-01-27 07:26:49.461188363 +0000 UTC m=+622.056810889" lastFinishedPulling="2026-01-27 07:26:54.003786795 +0000 UTC m=+626.599409321" observedRunningTime="2026-01-27 07:26:55.033225526 +0000 UTC m=+627.628848062" watchObservedRunningTime="2026-01-27 07:26:55.036717367 +0000 UTC m=+627.632339883" Jan 27 07:26:55 crc kubenswrapper[4764]: I0127 07:26:55.079313 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" podStartSLOduration=2.759436515 
podStartE2EDuration="7.079283802s" podCreationTimestamp="2026-01-27 07:26:48 +0000 UTC" firstStartedPulling="2026-01-27 07:26:49.710264577 +0000 UTC m=+622.305887093" lastFinishedPulling="2026-01-27 07:26:54.030111854 +0000 UTC m=+626.625734380" observedRunningTime="2026-01-27 07:26:55.076060198 +0000 UTC m=+627.671682724" watchObservedRunningTime="2026-01-27 07:26:55.079283802 +0000 UTC m=+627.674906328" Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.239993 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gwmsf"] Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.241253 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-controller" containerID="cri-o://e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.241336 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="sbdb" containerID="cri-o://6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.241323 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="nbdb" containerID="cri-o://55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.241418 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="northd" 
containerID="cri-o://137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.241415 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-node" containerID="cri-o://6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.241460 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-acl-logging" containerID="cri-o://0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.241371 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.282764 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" containerID="cri-o://f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" gracePeriod=30 Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.994939 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/3.log" Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.998704 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovn-acl-logging/0.log" Jan 27 07:26:58 crc kubenswrapper[4764]: I0127 07:26:58.999625 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovn-controller/0.log" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.000236 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.019793 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovnkube-controller/3.log" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.022753 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovn-acl-logging/0.log" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023339 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gwmsf_91863a32-a5e4-42d3-9d33-d672d2f1300d/ovn-controller/0.log" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023781 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" exitCode=0 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023823 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" exitCode=0 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023836 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" 
containerID="55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" exitCode=0 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023847 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" exitCode=0 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023859 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" exitCode=0 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023869 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" exitCode=0 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023878 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" exitCode=143 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023887 4764 generic.go:334] "Generic (PLEG): container finished" podID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerID="e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" exitCode=143 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023874 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023959 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.023980 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024005 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024021 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024036 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024050 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024065 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024079 4764 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024088 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024096 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024104 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024112 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024121 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024129 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024136 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024147 4764 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024159 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024169 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024176 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024184 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024192 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024199 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024206 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} Jan 27 
07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024213 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024220 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024227 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024237 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024248 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024256 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024266 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024275 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024283 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024291 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024299 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024307 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024314 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024321 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024333 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwmsf" event={"ID":"91863a32-a5e4-42d3-9d33-d672d2f1300d","Type":"ContainerDied","Data":"76f207b0105a912abd4edf63e3b1cfaf7e68f0a9ebc73a8f035437575dc49046"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024344 4764 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024354 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024362 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024369 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024377 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024385 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024392 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024399 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024406 4764 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024413 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.024430 4764 scope.go:117] "RemoveContainer" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.026835 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/2.log" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.028644 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/1.log" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.028695 4764 generic.go:334] "Generic (PLEG): container finished" podID="e936b8fc-81d9-4222-a66f-742b2db87386" containerID="14f862a72cf29d8fbfe9000a4f79195fca75a7ac58adf7a9a30d20280697f201" exitCode=2 Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.028728 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerDied","Data":"14f862a72cf29d8fbfe9000a4f79195fca75a7ac58adf7a9a30d20280697f201"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.028784 4764 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096"} Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.029586 4764 scope.go:117] "RemoveContainer" 
containerID="14f862a72cf29d8fbfe9000a4f79195fca75a7ac58adf7a9a30d20280697f201" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.030084 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2dvbb_openshift-multus(e936b8fc-81d9-4222-a66f-742b2db87386)\"" pod="openshift-multus/multus-2dvbb" podUID="e936b8fc-81d9-4222-a66f-742b2db87386" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.057425 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067252 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l8j52"] Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067591 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="sbdb" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067611 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="sbdb" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067625 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067631 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067640 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067665 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" 
containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067674 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kubecfg-setup" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067680 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kubecfg-setup" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067687 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-node" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067693 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-node" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067701 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-acl-logging" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067706 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-acl-logging" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067717 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067742 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067751 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="nbdb" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067756 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" 
containerName="nbdb" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067766 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="northd" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067772 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="northd" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067781 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067786 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.067792 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067798 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067932 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="sbdb" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067945 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067952 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-acl-logging" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067977 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" 
containerName="northd" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067986 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="nbdb" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.067994 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.068001 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-node" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.068009 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.068017 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.068023 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovn-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.068029 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.068153 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.068161 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.068172 4764 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.068177 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.071177 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" containerName="ovnkube-controller" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.073023 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.083391 4764 scope.go:117] "RemoveContainer" containerID="6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.100688 4764 scope.go:117] "RemoveContainer" containerID="55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.119613 4764 scope.go:117] "RemoveContainer" containerID="137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.132704 4764 scope.go:117] "RemoveContainer" containerID="bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.144746 4764 scope.go:117] "RemoveContainer" containerID="6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.155753 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf6fw\" (UniqueName: \"kubernetes.io/projected/91863a32-a5e4-42d3-9d33-d672d2f1300d-kube-api-access-pf6fw\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.155839 
4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-ovn-kubernetes\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.155862 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-bin\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.155887 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovn-node-metrics-cert\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.155992 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-netd\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156048 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-var-lib-openvswitch\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156007 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-ovn-kubernetes" (OuterVolumeSpecName: 
"host-run-ovn-kubernetes") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156046 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156117 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156073 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-kubelet\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156147 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156176 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-log-socket\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156198 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156209 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156231 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-log-socket" (OuterVolumeSpecName: "log-socket") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156242 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-env-overrides\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156275 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156290 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-netns\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156315 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-etc-openvswitch\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156344 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-node-log\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc 
kubenswrapper[4764]: I0127 07:26:59.156361 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-openvswitch\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156423 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-ovn\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156467 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-systemd-units\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156496 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-config\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156529 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-script-lib\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156553 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-systemd\") 
pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156589 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-slash\") pod \"91863a32-a5e4-42d3-9d33-d672d2f1300d\" (UID: \"91863a32-a5e4-42d3-9d33-d672d2f1300d\") " Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156863 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156899 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156925 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156960 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.156987 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-node-log" (OuterVolumeSpecName: "node-log") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.157015 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.157040 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.157064 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-slash" (OuterVolumeSpecName: "host-slash") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.157306 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.157591 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158061 4764 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158194 4764 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158205 4764 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158215 4764 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158228 4764 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158240 4764 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158252 4764 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158265 4764 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158277 4764 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158335 4764 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158359 4764 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158377 4764 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158416 4764 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158431 4764 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158475 4764 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158484 4764 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.158494 4764 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91863a32-a5e4-42d3-9d33-d672d2f1300d-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.159358 4764 scope.go:117] "RemoveContainer" containerID="0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.162660 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.163073 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91863a32-a5e4-42d3-9d33-d672d2f1300d-kube-api-access-pf6fw" (OuterVolumeSpecName: "kube-api-access-pf6fw") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "kube-api-access-pf6fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.173718 4764 scope.go:117] "RemoveContainer" containerID="e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.173847 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "91863a32-a5e4-42d3-9d33-d672d2f1300d" (UID: "91863a32-a5e4-42d3-9d33-d672d2f1300d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.192169 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-554kw" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.196295 4764 scope.go:117] "RemoveContainer" containerID="5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.216724 4764 scope.go:117] "RemoveContainer" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.217118 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": container with ID starting with f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e not found: ID does not exist" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.217149 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} err="failed to get container status 
\"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": rpc error: code = NotFound desc = could not find container \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": container with ID starting with f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.217170 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.217402 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": container with ID starting with e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59 not found: ID does not exist" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.217423 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} err="failed to get container status \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": rpc error: code = NotFound desc = could not find container \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": container with ID starting with e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.217460 4764 scope.go:117] "RemoveContainer" containerID="6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.217945 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": container with ID starting with 6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22 not found: ID does not exist" containerID="6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.217969 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} err="failed to get container status \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": rpc error: code = NotFound desc = could not find container \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": container with ID starting with 6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.217987 4764 scope.go:117] "RemoveContainer" containerID="55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.218208 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": container with ID starting with 55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660 not found: ID does not exist" containerID="55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.218234 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} err="failed to get container status \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": rpc error: code = NotFound desc = could not find container \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": container with ID 
starting with 55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.218253 4764 scope.go:117] "RemoveContainer" containerID="137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.218758 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": container with ID starting with 137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265 not found: ID does not exist" containerID="137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.219185 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} err="failed to get container status \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": rpc error: code = NotFound desc = could not find container \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": container with ID starting with 137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.219201 4764 scope.go:117] "RemoveContainer" containerID="bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.219462 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": container with ID starting with bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe not found: ID does not exist" containerID="bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" Jan 27 
07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.219485 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} err="failed to get container status \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": rpc error: code = NotFound desc = could not find container \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": container with ID starting with bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.219501 4764 scope.go:117] "RemoveContainer" containerID="6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.219715 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": container with ID starting with 6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f not found: ID does not exist" containerID="6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.219737 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} err="failed to get container status \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": rpc error: code = NotFound desc = could not find container \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": container with ID starting with 6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.219753 4764 scope.go:117] "RemoveContainer" 
containerID="0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.220069 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": container with ID starting with 0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443 not found: ID does not exist" containerID="0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.220094 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} err="failed to get container status \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": rpc error: code = NotFound desc = could not find container \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": container with ID starting with 0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.220111 4764 scope.go:117] "RemoveContainer" containerID="e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.220387 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": container with ID starting with e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97 not found: ID does not exist" containerID="e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.220419 4764 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} err="failed to get container status \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": rpc error: code = NotFound desc = could not find container \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": container with ID starting with e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.220462 4764 scope.go:117] "RemoveContainer" containerID="5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f" Jan 27 07:26:59 crc kubenswrapper[4764]: E0127 07:26:59.220678 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": container with ID starting with 5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f not found: ID does not exist" containerID="5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.220710 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} err="failed to get container status \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": rpc error: code = NotFound desc = could not find container \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": container with ID starting with 5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.220730 4764 scope.go:117] "RemoveContainer" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221022 4764 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} err="failed to get container status \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": rpc error: code = NotFound desc = could not find container \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": container with ID starting with f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221046 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221225 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} err="failed to get container status \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": rpc error: code = NotFound desc = could not find container \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": container with ID starting with e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221243 4764 scope.go:117] "RemoveContainer" containerID="6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221477 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} err="failed to get container status \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": rpc error: code = NotFound desc = could not find container \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": container with ID starting with 6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22 not 
found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221497 4764 scope.go:117] "RemoveContainer" containerID="55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221740 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} err="failed to get container status \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": rpc error: code = NotFound desc = could not find container \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": container with ID starting with 55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.221768 4764 scope.go:117] "RemoveContainer" containerID="137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222029 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} err="failed to get container status \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": rpc error: code = NotFound desc = could not find container \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": container with ID starting with 137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222058 4764 scope.go:117] "RemoveContainer" containerID="bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222300 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} err="failed to get 
container status \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": rpc error: code = NotFound desc = could not find container \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": container with ID starting with bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222329 4764 scope.go:117] "RemoveContainer" containerID="6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222569 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} err="failed to get container status \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": rpc error: code = NotFound desc = could not find container \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": container with ID starting with 6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222596 4764 scope.go:117] "RemoveContainer" containerID="0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222804 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} err="failed to get container status \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": rpc error: code = NotFound desc = could not find container \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": container with ID starting with 0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.222830 4764 scope.go:117] "RemoveContainer" 
containerID="e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223016 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} err="failed to get container status \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": rpc error: code = NotFound desc = could not find container \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": container with ID starting with e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223043 4764 scope.go:117] "RemoveContainer" containerID="5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223276 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} err="failed to get container status \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": rpc error: code = NotFound desc = could not find container \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": container with ID starting with 5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223306 4764 scope.go:117] "RemoveContainer" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223543 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} err="failed to get container status \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": rpc error: code = NotFound desc = could 
not find container \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": container with ID starting with f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223569 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223832 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} err="failed to get container status \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": rpc error: code = NotFound desc = could not find container \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": container with ID starting with e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.223859 4764 scope.go:117] "RemoveContainer" containerID="6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.224039 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} err="failed to get container status \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": rpc error: code = NotFound desc = could not find container \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": container with ID starting with 6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.224067 4764 scope.go:117] "RemoveContainer" containerID="55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 
07:26:59.224550 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} err="failed to get container status \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": rpc error: code = NotFound desc = could not find container \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": container with ID starting with 55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.224573 4764 scope.go:117] "RemoveContainer" containerID="137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.224790 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} err="failed to get container status \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": rpc error: code = NotFound desc = could not find container \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": container with ID starting with 137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.224806 4764 scope.go:117] "RemoveContainer" containerID="bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225090 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} err="failed to get container status \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": rpc error: code = NotFound desc = could not find container \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": container with ID starting with 
bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225115 4764 scope.go:117] "RemoveContainer" containerID="6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225381 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} err="failed to get container status \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": rpc error: code = NotFound desc = could not find container \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": container with ID starting with 6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225407 4764 scope.go:117] "RemoveContainer" containerID="0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225620 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} err="failed to get container status \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": rpc error: code = NotFound desc = could not find container \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": container with ID starting with 0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225648 4764 scope.go:117] "RemoveContainer" containerID="e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225796 4764 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} err="failed to get container status \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": rpc error: code = NotFound desc = could not find container \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": container with ID starting with e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.225823 4764 scope.go:117] "RemoveContainer" containerID="5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226092 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} err="failed to get container status \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": rpc error: code = NotFound desc = could not find container \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": container with ID starting with 5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226113 4764 scope.go:117] "RemoveContainer" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226321 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} err="failed to get container status \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": rpc error: code = NotFound desc = could not find container \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": container with ID starting with f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e not found: ID does not 
exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226342 4764 scope.go:117] "RemoveContainer" containerID="e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226507 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59"} err="failed to get container status \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": rpc error: code = NotFound desc = could not find container \"e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59\": container with ID starting with e48552d49b0267dcd97956a5d9839b831d17c811fdbf5f8f6d6515deeec90f59 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226531 4764 scope.go:117] "RemoveContainer" containerID="6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226816 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22"} err="failed to get container status \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": rpc error: code = NotFound desc = could not find container \"6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22\": container with ID starting with 6b5ffd263f27c6a028cad8dda14d1a9d2810de738b7edb92484d129ffb679d22 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.226830 4764 scope.go:117] "RemoveContainer" containerID="55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227011 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660"} err="failed to get container status 
\"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": rpc error: code = NotFound desc = could not find container \"55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660\": container with ID starting with 55a6ab6b91a4f03ae274e0ed042b5023bcb1b349104e77fb8a9994ee06d8a660 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227028 4764 scope.go:117] "RemoveContainer" containerID="137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227221 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265"} err="failed to get container status \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": rpc error: code = NotFound desc = could not find container \"137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265\": container with ID starting with 137783513ce4852bd9cb97cc9c56bdf8fc191b5857ed4f12f7b7de5d0c846265 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227240 4764 scope.go:117] "RemoveContainer" containerID="bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227405 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe"} err="failed to get container status \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": rpc error: code = NotFound desc = could not find container \"bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe\": container with ID starting with bd2f2d083ac80fc26acfd4b5719618b0892a47a1f8c10f0cf711eca46a935ebe not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227426 4764 scope.go:117] "RemoveContainer" 
containerID="6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227624 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f"} err="failed to get container status \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": rpc error: code = NotFound desc = could not find container \"6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f\": container with ID starting with 6b5c5db9391fa1f2ebb462d3b49ca3fa85cbdc6c2b706ce1f0ad780e23c7eb5f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227651 4764 scope.go:117] "RemoveContainer" containerID="0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227855 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443"} err="failed to get container status \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": rpc error: code = NotFound desc = could not find container \"0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443\": container with ID starting with 0d9375d3e296f20320fa06e1cdfb9a875a1aa3470397cd5b8529999f5ce8b443 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.227874 4764 scope.go:117] "RemoveContainer" containerID="e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.228040 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97"} err="failed to get container status \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": rpc error: code = NotFound desc = could 
not find container \"e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97\": container with ID starting with e51f59950155a20756606fd2b880e5259e36042f9df1e21cb68ceb15eb665b97 not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.228058 4764 scope.go:117] "RemoveContainer" containerID="5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.228220 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f"} err="failed to get container status \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": rpc error: code = NotFound desc = could not find container \"5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f\": container with ID starting with 5ead9b4bfc85ae42eb30b38a82b6b69b1cabf1b1cbf2402cf42bf3282435f82f not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.228238 4764 scope.go:117] "RemoveContainer" containerID="f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.228391 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e"} err="failed to get container status \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": rpc error: code = NotFound desc = could not find container \"f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e\": container with ID starting with f916698151cdbed0de99ddd2ab3095024154aa7034b6a697d006014fd000547e not found: ID does not exist" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.259888 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-node-log\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.259957 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovnkube-config\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260009 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-slash\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260028 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-systemd-units\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260051 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovn-node-metrics-cert\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260081 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovnkube-script-lib\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260154 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-run-netns\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260185 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260207 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-ovn\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260230 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-cni-bin\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260292 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-cni-netd\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260316 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-etc-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260338 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-var-lib-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260377 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260408 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-systemd\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260454 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260481 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-env-overrides\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260527 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ttgx\" (UniqueName: \"kubernetes.io/projected/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-kube-api-access-6ttgx\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260559 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-log-socket\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260579 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-kubelet\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc 
kubenswrapper[4764]: I0127 07:26:59.260633 4764 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91863a32-a5e4-42d3-9d33-d672d2f1300d-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260651 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf6fw\" (UniqueName: \"kubernetes.io/projected/91863a32-a5e4-42d3-9d33-d672d2f1300d-kube-api-access-pf6fw\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.260668 4764 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91863a32-a5e4-42d3-9d33-d672d2f1300d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.355497 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gwmsf"] Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.361655 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gwmsf"] Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362412 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-node-log\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362560 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovnkube-config\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362598 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-slash\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362635 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-systemd-units\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362695 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovn-node-metrics-cert\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362730 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovnkube-script-lib\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362766 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-run-netns\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362805 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362839 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-ovn\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362874 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-cni-bin\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362912 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-cni-netd\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362947 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-etc-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.362984 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-var-lib-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363031 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363076 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-systemd\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363112 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363153 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-env-overrides\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363198 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ttgx\" (UniqueName: 
\"kubernetes.io/projected/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-kube-api-access-6ttgx\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363229 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-log-socket\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363262 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-kubelet\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363345 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovnkube-config\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363386 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-kubelet\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363688 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-cni-netd\") pod \"ovnkube-node-l8j52\" (UID: 
\"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363756 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-etc-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363806 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-var-lib-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363851 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363893 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-systemd\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363932 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.363994 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-cni-bin\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.364041 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-slash\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.364068 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-systemd-units\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.364398 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-node-log\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.364550 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-host-run-netns\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.364948 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-env-overrides\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.365013 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-openvswitch\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.365047 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-run-ovn\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.365070 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-log-socket\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.365521 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovnkube-script-lib\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.369788 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-ovn-node-metrics-cert\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.379583 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ttgx\" (UniqueName: \"kubernetes.io/projected/031aba74-3c9b-4458-9e8d-f5a7bc09e7f6-kube-api-access-6ttgx\") pod \"ovnkube-node-l8j52\" (UID: \"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:26:59 crc kubenswrapper[4764]: I0127 07:26:59.392886 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:27:00 crc kubenswrapper[4764]: I0127 07:27:00.040626 4764 generic.go:334] "Generic (PLEG): container finished" podID="031aba74-3c9b-4458-9e8d-f5a7bc09e7f6" containerID="adebd6f929a660eacb071e37c05efd687e2e90ad3481fc4c580880b27aa3ffa4" exitCode=0 Jan 27 07:27:00 crc kubenswrapper[4764]: I0127 07:27:00.040724 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerDied","Data":"adebd6f929a660eacb071e37c05efd687e2e90ad3481fc4c580880b27aa3ffa4"} Jan 27 07:27:00 crc kubenswrapper[4764]: I0127 07:27:00.041084 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"7b4c3e2cbd827c0558020aad9dbec8b195eba43686ccf8e3d8e9760c9283185b"} Jan 27 07:27:00 crc kubenswrapper[4764]: I0127 07:27:00.449130 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91863a32-a5e4-42d3-9d33-d672d2f1300d" path="/var/lib/kubelet/pods/91863a32-a5e4-42d3-9d33-d672d2f1300d/volumes" Jan 27 07:27:01 crc kubenswrapper[4764]: I0127 07:27:01.053102 
4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"3144e14f846b6cbab0e5b1e19413cd397dac1fa5b0da7cd013b0faac991a9003"} Jan 27 07:27:01 crc kubenswrapper[4764]: I0127 07:27:01.053398 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"ff3707284f865c695a1b0030ef078f063e7d957af747ab38a4b6831f1e539be0"} Jan 27 07:27:01 crc kubenswrapper[4764]: I0127 07:27:01.053410 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"8cfa264bb0f5cf035ca34d8280b9084d637958a7110963a14e92f948b2cc4c9f"} Jan 27 07:27:01 crc kubenswrapper[4764]: I0127 07:27:01.053421 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"db9f59f2d798d11861e186ab06da6cdfda8958a2e853149ddee9ac57f4978c22"} Jan 27 07:27:01 crc kubenswrapper[4764]: I0127 07:27:01.053431 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"371af6ed56a7eb9405c6ac8bb169275fe8724ec897d1cfe57a420917b50fa998"} Jan 27 07:27:01 crc kubenswrapper[4764]: I0127 07:27:01.053465 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"65c06f484fc1eef10e831538278881f4d33d0cf4ee78859aa5edd65b7857fad7"} Jan 27 07:27:04 crc kubenswrapper[4764]: I0127 07:27:04.079051 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"78c368802229d5f53407870ea74dd3b6ef366e95202d74bd0c0033101d923b43"} Jan 27 07:27:06 crc kubenswrapper[4764]: I0127 07:27:06.096236 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" event={"ID":"031aba74-3c9b-4458-9e8d-f5a7bc09e7f6","Type":"ContainerStarted","Data":"7571b706140999c0679c64a9a4f1df6fbbf7a3cb8c8b4de71b759e6c2e90b1d6"} Jan 27 07:27:06 crc kubenswrapper[4764]: I0127 07:27:06.097067 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:27:06 crc kubenswrapper[4764]: I0127 07:27:06.097100 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:27:06 crc kubenswrapper[4764]: I0127 07:27:06.097123 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:27:06 crc kubenswrapper[4764]: I0127 07:27:06.128348 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:27:06 crc kubenswrapper[4764]: I0127 07:27:06.133295 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" podStartSLOduration=7.133273704 podStartE2EDuration="7.133273704s" podCreationTimestamp="2026-01-27 07:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:27:06.130154413 +0000 UTC m=+638.725776959" watchObservedRunningTime="2026-01-27 07:27:06.133273704 +0000 UTC m=+638.728896240" Jan 27 07:27:06 crc kubenswrapper[4764]: I0127 07:27:06.138717 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:27:12 crc kubenswrapper[4764]: I0127 07:27:12.438986 4764 scope.go:117] "RemoveContainer" containerID="14f862a72cf29d8fbfe9000a4f79195fca75a7ac58adf7a9a30d20280697f201" Jan 27 07:27:12 crc kubenswrapper[4764]: E0127 07:27:12.439793 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2dvbb_openshift-multus(e936b8fc-81d9-4222-a66f-742b2db87386)\"" pod="openshift-multus/multus-2dvbb" podUID="e936b8fc-81d9-4222-a66f-742b2db87386" Jan 27 07:27:26 crc kubenswrapper[4764]: I0127 07:27:26.438286 4764 scope.go:117] "RemoveContainer" containerID="14f862a72cf29d8fbfe9000a4f79195fca75a7ac58adf7a9a30d20280697f201" Jan 27 07:27:27 crc kubenswrapper[4764]: I0127 07:27:27.235630 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/2.log" Jan 27 07:27:27 crc kubenswrapper[4764]: I0127 07:27:27.237056 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/1.log" Jan 27 07:27:27 crc kubenswrapper[4764]: I0127 07:27:27.237133 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dvbb" event={"ID":"e936b8fc-81d9-4222-a66f-742b2db87386","Type":"ContainerStarted","Data":"7b8293ae741232a0cf85fd11802fc41661c673a193e3d1a860ac70be5778a672"} Jan 27 07:27:28 crc kubenswrapper[4764]: I0127 07:27:28.789966 4764 scope.go:117] "RemoveContainer" containerID="edc24ad375381fa80874ff8057876239cccc37d52ceb2da5bcd5b29230bf4096" Jan 27 07:27:29 crc kubenswrapper[4764]: I0127 07:27:29.253420 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dvbb_e936b8fc-81d9-4222-a66f-742b2db87386/kube-multus/2.log" Jan 27 07:27:29 crc kubenswrapper[4764]: I0127 
07:27:29.421100 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l8j52" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.720618 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr"] Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.722595 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.724727 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.733618 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr"] Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.778830 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqhx\" (UniqueName: \"kubernetes.io/projected/c5dce277-e909-47e5-bbae-57b47e19613b-kube-api-access-rxqhx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.778891 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 
07:27:40.778980 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.880727 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.880806 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxqhx\" (UniqueName: \"kubernetes.io/projected/c5dce277-e909-47e5-bbae-57b47e19613b-kube-api-access-rxqhx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.880875 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.881606 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.882230 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:40 crc kubenswrapper[4764]: I0127 07:27:40.902621 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxqhx\" (UniqueName: \"kubernetes.io/projected/c5dce277-e909-47e5-bbae-57b47e19613b-kube-api-access-rxqhx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:41 crc kubenswrapper[4764]: I0127 07:27:41.041349 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:41 crc kubenswrapper[4764]: I0127 07:27:41.274503 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr"] Jan 27 07:27:41 crc kubenswrapper[4764]: I0127 07:27:41.329708 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" event={"ID":"c5dce277-e909-47e5-bbae-57b47e19613b","Type":"ContainerStarted","Data":"a7e9aed5e31a4c4889f84aee84bfaa72d82d75775c13f83cd08405bb7c5f0306"} Jan 27 07:27:42 crc kubenswrapper[4764]: I0127 07:27:42.340360 4764 generic.go:334] "Generic (PLEG): container finished" podID="c5dce277-e909-47e5-bbae-57b47e19613b" containerID="e55df9d4e951fcb6af622c73f9b3e366d666e193acfb5ebdc616c4ae3a06ab51" exitCode=0 Jan 27 07:27:42 crc kubenswrapper[4764]: I0127 07:27:42.340471 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" event={"ID":"c5dce277-e909-47e5-bbae-57b47e19613b","Type":"ContainerDied","Data":"e55df9d4e951fcb6af622c73f9b3e366d666e193acfb5ebdc616c4ae3a06ab51"} Jan 27 07:27:44 crc kubenswrapper[4764]: I0127 07:27:44.364272 4764 generic.go:334] "Generic (PLEG): container finished" podID="c5dce277-e909-47e5-bbae-57b47e19613b" containerID="da5a5eb59fad02db79c4e22bc389eeca8fd4863c8eb242e0ffd86e537623a558" exitCode=0 Jan 27 07:27:44 crc kubenswrapper[4764]: I0127 07:27:44.364349 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" event={"ID":"c5dce277-e909-47e5-bbae-57b47e19613b","Type":"ContainerDied","Data":"da5a5eb59fad02db79c4e22bc389eeca8fd4863c8eb242e0ffd86e537623a558"} Jan 27 07:27:45 crc kubenswrapper[4764]: I0127 07:27:45.373078 4764 
generic.go:334] "Generic (PLEG): container finished" podID="c5dce277-e909-47e5-bbae-57b47e19613b" containerID="6e974fb60998c2a86b01bbcbc44ec483522c686f0607ce9d8bb2a0e868c353da" exitCode=0 Jan 27 07:27:45 crc kubenswrapper[4764]: I0127 07:27:45.373145 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" event={"ID":"c5dce277-e909-47e5-bbae-57b47e19613b","Type":"ContainerDied","Data":"6e974fb60998c2a86b01bbcbc44ec483522c686f0607ce9d8bb2a0e868c353da"} Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.627919 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.776571 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxqhx\" (UniqueName: \"kubernetes.io/projected/c5dce277-e909-47e5-bbae-57b47e19613b-kube-api-access-rxqhx\") pod \"c5dce277-e909-47e5-bbae-57b47e19613b\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.776697 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-util\") pod \"c5dce277-e909-47e5-bbae-57b47e19613b\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.776747 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-bundle\") pod \"c5dce277-e909-47e5-bbae-57b47e19613b\" (UID: \"c5dce277-e909-47e5-bbae-57b47e19613b\") " Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.777952 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-bundle" (OuterVolumeSpecName: "bundle") pod "c5dce277-e909-47e5-bbae-57b47e19613b" (UID: "c5dce277-e909-47e5-bbae-57b47e19613b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.791759 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5dce277-e909-47e5-bbae-57b47e19613b-kube-api-access-rxqhx" (OuterVolumeSpecName: "kube-api-access-rxqhx") pod "c5dce277-e909-47e5-bbae-57b47e19613b" (UID: "c5dce277-e909-47e5-bbae-57b47e19613b"). InnerVolumeSpecName "kube-api-access-rxqhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.792827 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-util" (OuterVolumeSpecName: "util") pod "c5dce277-e909-47e5-bbae-57b47e19613b" (UID: "c5dce277-e909-47e5-bbae-57b47e19613b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.878308 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxqhx\" (UniqueName: \"kubernetes.io/projected/c5dce277-e909-47e5-bbae-57b47e19613b-kube-api-access-rxqhx\") on node \"crc\" DevicePath \"\"" Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.878885 4764 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:27:46 crc kubenswrapper[4764]: I0127 07:27:46.878926 4764 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5dce277-e909-47e5-bbae-57b47e19613b-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:27:47 crc kubenswrapper[4764]: I0127 07:27:47.390701 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" event={"ID":"c5dce277-e909-47e5-bbae-57b47e19613b","Type":"ContainerDied","Data":"a7e9aed5e31a4c4889f84aee84bfaa72d82d75775c13f83cd08405bb7c5f0306"} Jan 27 07:27:47 crc kubenswrapper[4764]: I0127 07:27:47.390760 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7e9aed5e31a4c4889f84aee84bfaa72d82d75775c13f83cd08405bb7c5f0306" Jan 27 07:27:47 crc kubenswrapper[4764]: I0127 07:27:47.391378 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.118331 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dlghs"] Jan 27 07:27:52 crc kubenswrapper[4764]: E0127 07:27:52.118943 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dce277-e909-47e5-bbae-57b47e19613b" containerName="extract" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.118957 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dce277-e909-47e5-bbae-57b47e19613b" containerName="extract" Jan 27 07:27:52 crc kubenswrapper[4764]: E0127 07:27:52.118969 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dce277-e909-47e5-bbae-57b47e19613b" containerName="pull" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.118974 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dce277-e909-47e5-bbae-57b47e19613b" containerName="pull" Jan 27 07:27:52 crc kubenswrapper[4764]: E0127 07:27:52.118983 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dce277-e909-47e5-bbae-57b47e19613b" containerName="util" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.118992 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dce277-e909-47e5-bbae-57b47e19613b" containerName="util" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.119107 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5dce277-e909-47e5-bbae-57b47e19613b" containerName="extract" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.119532 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.121882 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-68vf6" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.122016 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.122361 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.198564 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dlghs"] Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.248295 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvvw\" (UniqueName: \"kubernetes.io/projected/c5651ef7-a219-43f4-b015-d509be4e1e3f-kube-api-access-bjvvw\") pod \"nmstate-operator-646758c888-dlghs\" (UID: \"c5651ef7-a219-43f4-b015-d509be4e1e3f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.350018 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvvw\" (UniqueName: \"kubernetes.io/projected/c5651ef7-a219-43f4-b015-d509be4e1e3f-kube-api-access-bjvvw\") pod \"nmstate-operator-646758c888-dlghs\" (UID: \"c5651ef7-a219-43f4-b015-d509be4e1e3f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.372237 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvvw\" (UniqueName: \"kubernetes.io/projected/c5651ef7-a219-43f4-b015-d509be4e1e3f-kube-api-access-bjvvw\") pod \"nmstate-operator-646758c888-dlghs\" (UID: 
\"c5651ef7-a219-43f4-b015-d509be4e1e3f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.441763 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" Jan 27 07:27:52 crc kubenswrapper[4764]: I0127 07:27:52.660218 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dlghs"] Jan 27 07:27:53 crc kubenswrapper[4764]: I0127 07:27:53.427988 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" event={"ID":"c5651ef7-a219-43f4-b015-d509be4e1e3f","Type":"ContainerStarted","Data":"aa899022a42905765e9786f9dc8fde29d1986a8733794341e1d6b40409d18d7c"} Jan 27 07:27:55 crc kubenswrapper[4764]: I0127 07:27:55.442076 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" event={"ID":"c5651ef7-a219-43f4-b015-d509be4e1e3f","Type":"ContainerStarted","Data":"aa94987efd546e37178cc932454e49b8d3b5e64efdf97c8cd549e315a8a9d145"} Jan 27 07:27:55 crc kubenswrapper[4764]: I0127 07:27:55.470811 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dlghs" podStartSLOduration=1.581934007 podStartE2EDuration="3.470785552s" podCreationTimestamp="2026-01-27 07:27:52 +0000 UTC" firstStartedPulling="2026-01-27 07:27:52.671610622 +0000 UTC m=+685.267233148" lastFinishedPulling="2026-01-27 07:27:54.560462167 +0000 UTC m=+687.156084693" observedRunningTime="2026-01-27 07:27:55.466303229 +0000 UTC m=+688.061925765" watchObservedRunningTime="2026-01-27 07:27:55.470785552 +0000 UTC m=+688.066408078" Jan 27 07:28:00 crc kubenswrapper[4764]: I0127 07:28:00.991933 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rj8zb"] Jan 27 07:28:00 crc kubenswrapper[4764]: I0127 
07:28:00.993182 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" Jan 27 07:28:00 crc kubenswrapper[4764]: I0127 07:28:00.995682 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-x4pdp" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.009458 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rj8zb"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.025761 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.027046 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.029469 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.036892 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sqp2c"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.037916 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.057372 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.080882 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnq8\" (UniqueName: \"kubernetes.io/projected/f7a83740-7cd1-4527-a649-f1c90cf6b280-kube-api-access-shnq8\") pod \"nmstate-metrics-54757c584b-rj8zb\" (UID: \"f7a83740-7cd1-4527-a649-f1c90cf6b280\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.140796 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.142009 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.152877 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.152954 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.154303 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cclgk" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.158554 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.181577 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-nmstate-lock\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.181620 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-ovs-socket\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.181644 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shnq8\" (UniqueName: \"kubernetes.io/projected/f7a83740-7cd1-4527-a649-f1c90cf6b280-kube-api-access-shnq8\") pod \"nmstate-metrics-54757c584b-rj8zb\" (UID: \"f7a83740-7cd1-4527-a649-f1c90cf6b280\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.181685 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-dbus-socket\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.181717 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/155fa58c-a112-4cf7-b994-65b5efd97dc6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f8gcn\" (UID: \"155fa58c-a112-4cf7-b994-65b5efd97dc6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.181736 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2jk5h\" (UniqueName: \"kubernetes.io/projected/d3b5797e-4e42-4b80-bb38-f9672697cc0b-kube-api-access-2jk5h\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.181764 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mn6x\" (UniqueName: \"kubernetes.io/projected/155fa58c-a112-4cf7-b994-65b5efd97dc6-kube-api-access-4mn6x\") pod \"nmstate-webhook-8474b5b9d8-f8gcn\" (UID: \"155fa58c-a112-4cf7-b994-65b5efd97dc6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.214723 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnq8\" (UniqueName: \"kubernetes.io/projected/f7a83740-7cd1-4527-a649-f1c90cf6b280-kube-api-access-shnq8\") pod \"nmstate-metrics-54757c584b-rj8zb\" (UID: \"f7a83740-7cd1-4527-a649-f1c90cf6b280\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.282862 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/217f72e1-69f3-4204-9240-c04499e62f42-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.282956 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-dbus-socket\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283013 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/155fa58c-a112-4cf7-b994-65b5efd97dc6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f8gcn\" (UID: \"155fa58c-a112-4cf7-b994-65b5efd97dc6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283035 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jk5h\" (UniqueName: \"kubernetes.io/projected/d3b5797e-4e42-4b80-bb38-f9672697cc0b-kube-api-access-2jk5h\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283074 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mn6x\" (UniqueName: \"kubernetes.io/projected/155fa58c-a112-4cf7-b994-65b5efd97dc6-kube-api-access-4mn6x\") pod \"nmstate-webhook-8474b5b9d8-f8gcn\" (UID: \"155fa58c-a112-4cf7-b994-65b5efd97dc6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283092 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/217f72e1-69f3-4204-9240-c04499e62f42-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283125 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-nmstate-lock\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: 
I0127 07:28:01.283143 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4xr\" (UniqueName: \"kubernetes.io/projected/217f72e1-69f3-4204-9240-c04499e62f42-kube-api-access-hb4xr\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283170 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-ovs-socket\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283224 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-ovs-socket\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: E0127 07:28:01.283220 4764 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 07:28:01 crc kubenswrapper[4764]: E0127 07:28:01.283351 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/155fa58c-a112-4cf7-b994-65b5efd97dc6-tls-key-pair podName:155fa58c-a112-4cf7-b994-65b5efd97dc6 nodeName:}" failed. No retries permitted until 2026-01-27 07:28:01.783325406 +0000 UTC m=+694.378947932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/155fa58c-a112-4cf7-b994-65b5efd97dc6-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-f8gcn" (UID: "155fa58c-a112-4cf7-b994-65b5efd97dc6") : secret "openshift-nmstate-webhook" not found Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283379 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-dbus-socket\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.283592 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d3b5797e-4e42-4b80-bb38-f9672697cc0b-nmstate-lock\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.313107 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.313534 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mn6x\" (UniqueName: \"kubernetes.io/projected/155fa58c-a112-4cf7-b994-65b5efd97dc6-kube-api-access-4mn6x\") pod \"nmstate-webhook-8474b5b9d8-f8gcn\" (UID: \"155fa58c-a112-4cf7-b994-65b5efd97dc6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.318215 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jk5h\" (UniqueName: \"kubernetes.io/projected/d3b5797e-4e42-4b80-bb38-f9672697cc0b-kube-api-access-2jk5h\") pod \"nmstate-handler-sqp2c\" (UID: \"d3b5797e-4e42-4b80-bb38-f9672697cc0b\") " pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.352830 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.368222 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f7c969bc5-68nz2"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.369013 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.385041 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/217f72e1-69f3-4204-9240-c04499e62f42-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.385133 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4xr\" (UniqueName: \"kubernetes.io/projected/217f72e1-69f3-4204-9240-c04499e62f42-kube-api-access-hb4xr\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.385200 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/217f72e1-69f3-4204-9240-c04499e62f42-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.386850 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/217f72e1-69f3-4204-9240-c04499e62f42-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.390125 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/217f72e1-69f3-4204-9240-c04499e62f42-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.400405 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f7c969bc5-68nz2"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.414509 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb4xr\" (UniqueName: \"kubernetes.io/projected/217f72e1-69f3-4204-9240-c04499e62f42-kube-api-access-hb4xr\") pod \"nmstate-console-plugin-7754f76f8b-nvhss\" (UID: \"217f72e1-69f3-4204-9240-c04499e62f42\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.456184 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.484880 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sqp2c" event={"ID":"d3b5797e-4e42-4b80-bb38-f9672697cc0b","Type":"ContainerStarted","Data":"0c5067a3115817be1e2c723e8d1f01524dc72bd15cb75f8633c30895d3e5b5c7"} Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.486028 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-oauth-config\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.486073 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-service-ca\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.486402 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-oauth-serving-cert\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.486546 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-config\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.486607 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-trusted-ca-bundle\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.486729 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-serving-cert\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.486849 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm2nh\" (UniqueName: \"kubernetes.io/projected/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-kube-api-access-xm2nh\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.593009 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-oauth-serving-cert\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.593596 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-config\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.593627 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-trusted-ca-bundle\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.593719 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-serving-cert\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.593753 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm2nh\" (UniqueName: \"kubernetes.io/projected/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-kube-api-access-xm2nh\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.593824 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-oauth-config\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.593844 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-service-ca\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.595127 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-oauth-serving-cert\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.595473 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-config\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.595812 4764 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-trusted-ca-bundle\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.596598 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-service-ca\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.597028 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rj8zb"] Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.601077 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-serving-cert\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.601545 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-console-oauth-config\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.613314 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm2nh\" (UniqueName: \"kubernetes.io/projected/512fb161-73e2-4cd3-ba7c-e6dfd4a418b2-kube-api-access-xm2nh\") pod \"console-7f7c969bc5-68nz2\" (UID: \"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2\") " pod="openshift-console/console-7f7c969bc5-68nz2" Jan 
27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.661931 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss"] Jan 27 07:28:01 crc kubenswrapper[4764]: W0127 07:28:01.665197 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217f72e1_69f3_4204_9240_c04499e62f42.slice/crio-7f09fc27bf9f41d9cdf15c3c5c7ba32d461f1b34b794fb16afe9252b3e67f147 WatchSource:0}: Error finding container 7f09fc27bf9f41d9cdf15c3c5c7ba32d461f1b34b794fb16afe9252b3e67f147: Status 404 returned error can't find the container with id 7f09fc27bf9f41d9cdf15c3c5c7ba32d461f1b34b794fb16afe9252b3e67f147 Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.714391 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.798648 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/155fa58c-a112-4cf7-b994-65b5efd97dc6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f8gcn\" (UID: \"155fa58c-a112-4cf7-b994-65b5efd97dc6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.801908 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/155fa58c-a112-4cf7-b994-65b5efd97dc6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f8gcn\" (UID: \"155fa58c-a112-4cf7-b994-65b5efd97dc6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.877559 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f7c969bc5-68nz2"] Jan 27 07:28:01 crc kubenswrapper[4764]: W0127 07:28:01.884843 4764 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod512fb161_73e2_4cd3_ba7c_e6dfd4a418b2.slice/crio-72fdb0775cc85e4451fa2f952addc0b7e518e7cff29e28825e4817385bc94685 WatchSource:0}: Error finding container 72fdb0775cc85e4451fa2f952addc0b7e518e7cff29e28825e4817385bc94685: Status 404 returned error can't find the container with id 72fdb0775cc85e4451fa2f952addc0b7e518e7cff29e28825e4817385bc94685 Jan 27 07:28:01 crc kubenswrapper[4764]: I0127 07:28:01.947288 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:02 crc kubenswrapper[4764]: I0127 07:28:02.365704 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn"] Jan 27 07:28:02 crc kubenswrapper[4764]: W0127 07:28:02.377804 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod155fa58c_a112_4cf7_b994_65b5efd97dc6.slice/crio-b123812497690d58ff42f68128411c6c870126ccf969581bf839942451fafcf8 WatchSource:0}: Error finding container b123812497690d58ff42f68128411c6c870126ccf969581bf839942451fafcf8: Status 404 returned error can't find the container with id b123812497690d58ff42f68128411c6c870126ccf969581bf839942451fafcf8 Jan 27 07:28:02 crc kubenswrapper[4764]: I0127 07:28:02.493140 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f7c969bc5-68nz2" event={"ID":"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2","Type":"ContainerStarted","Data":"fba776a9e107f84146ac8be491a0ea2c95e70eeba63df729de114658dce5ece6"} Jan 27 07:28:02 crc kubenswrapper[4764]: I0127 07:28:02.493213 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f7c969bc5-68nz2" event={"ID":"512fb161-73e2-4cd3-ba7c-e6dfd4a418b2","Type":"ContainerStarted","Data":"72fdb0775cc85e4451fa2f952addc0b7e518e7cff29e28825e4817385bc94685"} Jan 27 07:28:02 crc 
kubenswrapper[4764]: I0127 07:28:02.495498 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" event={"ID":"217f72e1-69f3-4204-9240-c04499e62f42","Type":"ContainerStarted","Data":"7f09fc27bf9f41d9cdf15c3c5c7ba32d461f1b34b794fb16afe9252b3e67f147"} Jan 27 07:28:02 crc kubenswrapper[4764]: I0127 07:28:02.497051 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" event={"ID":"f7a83740-7cd1-4527-a649-f1c90cf6b280","Type":"ContainerStarted","Data":"637787724f3128b28167f4880b0e372dea377d019d798dc1d7f31bc207534a1d"} Jan 27 07:28:02 crc kubenswrapper[4764]: I0127 07:28:02.498313 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" event={"ID":"155fa58c-a112-4cf7-b994-65b5efd97dc6","Type":"ContainerStarted","Data":"b123812497690d58ff42f68128411c6c870126ccf969581bf839942451fafcf8"} Jan 27 07:28:02 crc kubenswrapper[4764]: I0127 07:28:02.517307 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f7c969bc5-68nz2" podStartSLOduration=1.517273389 podStartE2EDuration="1.517273389s" podCreationTimestamp="2026-01-27 07:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:28:02.510641838 +0000 UTC m=+695.106264384" watchObservedRunningTime="2026-01-27 07:28:02.517273389 +0000 UTC m=+695.112895915" Jan 27 07:28:04 crc kubenswrapper[4764]: I0127 07:28:04.516257 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" event={"ID":"155fa58c-a112-4cf7-b994-65b5efd97dc6","Type":"ContainerStarted","Data":"2e465353087cf9675090a41c82ff955d17e104620d9ec870a094ecbd7bcdb73e"} Jan 27 07:28:04 crc kubenswrapper[4764]: I0127 07:28:04.516733 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:04 crc kubenswrapper[4764]: I0127 07:28:04.518029 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" event={"ID":"f7a83740-7cd1-4527-a649-f1c90cf6b280","Type":"ContainerStarted","Data":"7eedfecbc01a34ba0bc5405de74af12b6bee9c2ac6aed19d74fc4a514b560d2b"} Jan 27 07:28:04 crc kubenswrapper[4764]: I0127 07:28:04.525818 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sqp2c" event={"ID":"d3b5797e-4e42-4b80-bb38-f9672697cc0b","Type":"ContainerStarted","Data":"2a8fb327a647ded4884fde982c2ff9e012e89cf67015560f99233e18c6188776"} Jan 27 07:28:04 crc kubenswrapper[4764]: I0127 07:28:04.526122 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:04 crc kubenswrapper[4764]: I0127 07:28:04.557239 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" podStartSLOduration=2.792882317 podStartE2EDuration="4.557215921s" podCreationTimestamp="2026-01-27 07:28:00 +0000 UTC" firstStartedPulling="2026-01-27 07:28:02.382089105 +0000 UTC m=+694.977711631" lastFinishedPulling="2026-01-27 07:28:04.146422709 +0000 UTC m=+696.742045235" observedRunningTime="2026-01-27 07:28:04.536196288 +0000 UTC m=+697.131818854" watchObservedRunningTime="2026-01-27 07:28:04.557215921 +0000 UTC m=+697.152838447" Jan 27 07:28:04 crc kubenswrapper[4764]: I0127 07:28:04.559524 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sqp2c" podStartSLOduration=0.860335988 podStartE2EDuration="3.559519364s" podCreationTimestamp="2026-01-27 07:28:01 +0000 UTC" firstStartedPulling="2026-01-27 07:28:01.412711912 +0000 UTC m=+694.008334438" lastFinishedPulling="2026-01-27 07:28:04.111895268 +0000 UTC m=+696.707517814" 
observedRunningTime="2026-01-27 07:28:04.555593617 +0000 UTC m=+697.151216203" watchObservedRunningTime="2026-01-27 07:28:04.559519364 +0000 UTC m=+697.155141890" Jan 27 07:28:05 crc kubenswrapper[4764]: I0127 07:28:05.532004 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" event={"ID":"217f72e1-69f3-4204-9240-c04499e62f42","Type":"ContainerStarted","Data":"40c47674db95e6b77a8698d2bf145f39a02fb9fcb9c568de0c9fdb77c766fb52"} Jan 27 07:28:05 crc kubenswrapper[4764]: I0127 07:28:05.552140 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nvhss" podStartSLOduration=1.203296934 podStartE2EDuration="4.55211494s" podCreationTimestamp="2026-01-27 07:28:01 +0000 UTC" firstStartedPulling="2026-01-27 07:28:01.667087183 +0000 UTC m=+694.262709709" lastFinishedPulling="2026-01-27 07:28:05.015905199 +0000 UTC m=+697.611527715" observedRunningTime="2026-01-27 07:28:05.54883701 +0000 UTC m=+698.144459576" watchObservedRunningTime="2026-01-27 07:28:05.55211494 +0000 UTC m=+698.147737466" Jan 27 07:28:07 crc kubenswrapper[4764]: I0127 07:28:07.556225 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" event={"ID":"f7a83740-7cd1-4527-a649-f1c90cf6b280","Type":"ContainerStarted","Data":"37ba466ff2bee9e95a1b3e21df07904f3bd4cc2e0d06ffa4b0200f8633c8b8b6"} Jan 27 07:28:07 crc kubenswrapper[4764]: I0127 07:28:07.581986 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rj8zb" podStartSLOduration=2.579425902 podStartE2EDuration="7.581952148s" podCreationTimestamp="2026-01-27 07:28:00 +0000 UTC" firstStartedPulling="2026-01-27 07:28:01.608048624 +0000 UTC m=+694.203671150" lastFinishedPulling="2026-01-27 07:28:06.61057488 +0000 UTC m=+699.206197396" observedRunningTime="2026-01-27 07:28:07.572838659 +0000 UTC 
m=+700.168461185" watchObservedRunningTime="2026-01-27 07:28:07.581952148 +0000 UTC m=+700.177574714" Jan 27 07:28:11 crc kubenswrapper[4764]: I0127 07:28:11.377702 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sqp2c" Jan 27 07:28:11 crc kubenswrapper[4764]: I0127 07:28:11.715604 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:11 crc kubenswrapper[4764]: I0127 07:28:11.716496 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:11 crc kubenswrapper[4764]: I0127 07:28:11.722190 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:12 crc kubenswrapper[4764]: I0127 07:28:12.591679 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f7c969bc5-68nz2" Jan 27 07:28:12 crc kubenswrapper[4764]: I0127 07:28:12.647798 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dk6gm"] Jan 27 07:28:21 crc kubenswrapper[4764]: I0127 07:28:21.954562 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f8gcn" Jan 27 07:28:34 crc kubenswrapper[4764]: I0127 07:28:34.982331 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr"] Jan 27 07:28:34 crc kubenswrapper[4764]: I0127 07:28:34.984031 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:34 crc kubenswrapper[4764]: I0127 07:28:34.987341 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.004503 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr"] Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.071728 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.071972 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.072033 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw2nx\" (UniqueName: \"kubernetes.io/projected/a4266b92-4c6a-4651-8b75-a7e6479e5aff-kube-api-access-tw2nx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: 
I0127 07:28:35.172912 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.172974 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw2nx\" (UniqueName: \"kubernetes.io/projected/a4266b92-4c6a-4651-8b75-a7e6479e5aff-kube-api-access-tw2nx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.173034 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.173544 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.173577 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.192159 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw2nx\" (UniqueName: \"kubernetes.io/projected/a4266b92-4c6a-4651-8b75-a7e6479e5aff-kube-api-access-tw2nx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.304072 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.563460 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr"] Jan 27 07:28:35 crc kubenswrapper[4764]: I0127 07:28:35.823975 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" event={"ID":"a4266b92-4c6a-4651-8b75-a7e6479e5aff","Type":"ContainerStarted","Data":"09aacc780ccc3ed9963b65db793755b06fdc9e2f170d6bbba9aeeab1f7517f1c"} Jan 27 07:28:36 crc kubenswrapper[4764]: I0127 07:28:36.831247 4764 generic.go:334] "Generic (PLEG): container finished" podID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerID="da20e883d30e41444b26ea4e747c9152c06d32786b5974ea443971d5b1318c6f" exitCode=0 Jan 27 07:28:36 crc kubenswrapper[4764]: I0127 07:28:36.831303 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" event={"ID":"a4266b92-4c6a-4651-8b75-a7e6479e5aff","Type":"ContainerDied","Data":"da20e883d30e41444b26ea4e747c9152c06d32786b5974ea443971d5b1318c6f"} Jan 27 07:28:37 crc kubenswrapper[4764]: I0127 07:28:37.700172 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dk6gm" podUID="39f8297e-b534-44ff-9b38-4eb269960b80" containerName="console" containerID="cri-o://016ea56561d88ffec24316ace0edcd8e04bd6e4d057b035e9f15b56fe14cd136" gracePeriod=15 Jan 27 07:28:37 crc kubenswrapper[4764]: I0127 07:28:37.841803 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dk6gm_39f8297e-b534-44ff-9b38-4eb269960b80/console/0.log" Jan 27 07:28:37 crc kubenswrapper[4764]: I0127 07:28:37.842216 4764 generic.go:334] "Generic (PLEG): container finished" podID="39f8297e-b534-44ff-9b38-4eb269960b80" containerID="016ea56561d88ffec24316ace0edcd8e04bd6e4d057b035e9f15b56fe14cd136" exitCode=2 Jan 27 07:28:37 crc kubenswrapper[4764]: I0127 07:28:37.842254 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dk6gm" event={"ID":"39f8297e-b534-44ff-9b38-4eb269960b80","Type":"ContainerDied","Data":"016ea56561d88ffec24316ace0edcd8e04bd6e4d057b035e9f15b56fe14cd136"} Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.093115 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dk6gm_39f8297e-b534-44ff-9b38-4eb269960b80/console/0.log" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.093212 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.119910 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-oauth-serving-cert\") pod \"39f8297e-b534-44ff-9b38-4eb269960b80\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.119983 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-serving-cert\") pod \"39f8297e-b534-44ff-9b38-4eb269960b80\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.120015 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtvvt\" (UniqueName: \"kubernetes.io/projected/39f8297e-b534-44ff-9b38-4eb269960b80-kube-api-access-xtvvt\") pod \"39f8297e-b534-44ff-9b38-4eb269960b80\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.120088 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-service-ca\") pod \"39f8297e-b534-44ff-9b38-4eb269960b80\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.120108 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-oauth-config\") pod \"39f8297e-b534-44ff-9b38-4eb269960b80\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.120141 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-trusted-ca-bundle\") pod \"39f8297e-b534-44ff-9b38-4eb269960b80\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.120218 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-console-config\") pod \"39f8297e-b534-44ff-9b38-4eb269960b80\" (UID: \"39f8297e-b534-44ff-9b38-4eb269960b80\") " Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.121325 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-console-config" (OuterVolumeSpecName: "console-config") pod "39f8297e-b534-44ff-9b38-4eb269960b80" (UID: "39f8297e-b534-44ff-9b38-4eb269960b80"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.122293 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-service-ca" (OuterVolumeSpecName: "service-ca") pod "39f8297e-b534-44ff-9b38-4eb269960b80" (UID: "39f8297e-b534-44ff-9b38-4eb269960b80"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.122725 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "39f8297e-b534-44ff-9b38-4eb269960b80" (UID: "39f8297e-b534-44ff-9b38-4eb269960b80"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.123486 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "39f8297e-b534-44ff-9b38-4eb269960b80" (UID: "39f8297e-b534-44ff-9b38-4eb269960b80"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.128625 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f8297e-b534-44ff-9b38-4eb269960b80-kube-api-access-xtvvt" (OuterVolumeSpecName: "kube-api-access-xtvvt") pod "39f8297e-b534-44ff-9b38-4eb269960b80" (UID: "39f8297e-b534-44ff-9b38-4eb269960b80"). InnerVolumeSpecName "kube-api-access-xtvvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.128969 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "39f8297e-b534-44ff-9b38-4eb269960b80" (UID: "39f8297e-b534-44ff-9b38-4eb269960b80"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.129566 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "39f8297e-b534-44ff-9b38-4eb269960b80" (UID: "39f8297e-b534-44ff-9b38-4eb269960b80"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.221625 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtvvt\" (UniqueName: \"kubernetes.io/projected/39f8297e-b534-44ff-9b38-4eb269960b80-kube-api-access-xtvvt\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.221842 4764 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.221976 4764 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.221988 4764 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.222001 4764 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.222064 4764 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39f8297e-b534-44ff-9b38-4eb269960b80-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.222077 4764 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8297e-b534-44ff-9b38-4eb269960b80-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:38 crc 
kubenswrapper[4764]: I0127 07:28:38.859276 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dk6gm_39f8297e-b534-44ff-9b38-4eb269960b80/console/0.log" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.860856 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dk6gm" event={"ID":"39f8297e-b534-44ff-9b38-4eb269960b80","Type":"ContainerDied","Data":"c541c00ec476bb2c461065ee95d618c080031bba57e04fa8e9f659191a9615a4"} Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.860919 4764 scope.go:117] "RemoveContainer" containerID="016ea56561d88ffec24316ace0edcd8e04bd6e4d057b035e9f15b56fe14cd136" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.861406 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dk6gm" Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.890507 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dk6gm"] Jan 27 07:28:38 crc kubenswrapper[4764]: I0127 07:28:38.895297 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dk6gm"] Jan 27 07:28:39 crc kubenswrapper[4764]: I0127 07:28:39.869643 4764 generic.go:334] "Generic (PLEG): container finished" podID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerID="81100cdd8decc3e24907307d46afb2a6131caef0641671553d5bf427b950acda" exitCode=0 Jan 27 07:28:39 crc kubenswrapper[4764]: I0127 07:28:39.869827 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" event={"ID":"a4266b92-4c6a-4651-8b75-a7e6479e5aff","Type":"ContainerDied","Data":"81100cdd8decc3e24907307d46afb2a6131caef0641671553d5bf427b950acda"} Jan 27 07:28:40 crc kubenswrapper[4764]: I0127 07:28:40.453398 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="39f8297e-b534-44ff-9b38-4eb269960b80" path="/var/lib/kubelet/pods/39f8297e-b534-44ff-9b38-4eb269960b80/volumes" Jan 27 07:28:40 crc kubenswrapper[4764]: I0127 07:28:40.886155 4764 generic.go:334] "Generic (PLEG): container finished" podID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerID="b423e790e0aac626088c57d120707b6df1a0bcf05f63a4f35c218c74822ceb0b" exitCode=0 Jan 27 07:28:40 crc kubenswrapper[4764]: I0127 07:28:40.886329 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" event={"ID":"a4266b92-4c6a-4651-8b75-a7e6479e5aff","Type":"ContainerDied","Data":"b423e790e0aac626088c57d120707b6df1a0bcf05f63a4f35c218c74822ceb0b"} Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.130351 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.177431 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-bundle\") pod \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.177551 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw2nx\" (UniqueName: \"kubernetes.io/projected/a4266b92-4c6a-4651-8b75-a7e6479e5aff-kube-api-access-tw2nx\") pod \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\" (UID: \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.177613 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-util\") pod \"a4266b92-4c6a-4651-8b75-a7e6479e5aff\" (UID: 
\"a4266b92-4c6a-4651-8b75-a7e6479e5aff\") " Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.178684 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-bundle" (OuterVolumeSpecName: "bundle") pod "a4266b92-4c6a-4651-8b75-a7e6479e5aff" (UID: "a4266b92-4c6a-4651-8b75-a7e6479e5aff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.188938 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4266b92-4c6a-4651-8b75-a7e6479e5aff-kube-api-access-tw2nx" (OuterVolumeSpecName: "kube-api-access-tw2nx") pod "a4266b92-4c6a-4651-8b75-a7e6479e5aff" (UID: "a4266b92-4c6a-4651-8b75-a7e6479e5aff"). InnerVolumeSpecName "kube-api-access-tw2nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.195699 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-util" (OuterVolumeSpecName: "util") pod "a4266b92-4c6a-4651-8b75-a7e6479e5aff" (UID: "a4266b92-4c6a-4651-8b75-a7e6479e5aff"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.278667 4764 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.278728 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw2nx\" (UniqueName: \"kubernetes.io/projected/a4266b92-4c6a-4651-8b75-a7e6479e5aff-kube-api-access-tw2nx\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.278748 4764 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4266b92-4c6a-4651-8b75-a7e6479e5aff-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.904107 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" event={"ID":"a4266b92-4c6a-4651-8b75-a7e6479e5aff","Type":"ContainerDied","Data":"09aacc780ccc3ed9963b65db793755b06fdc9e2f170d6bbba9aeeab1f7517f1c"} Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.904179 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09aacc780ccc3ed9963b65db793755b06fdc9e2f170d6bbba9aeeab1f7517f1c" Jan 27 07:28:42 crc kubenswrapper[4764]: I0127 07:28:42.904149 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.342065 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk"] Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.343263 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerName="pull" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.343285 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerName="pull" Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.343328 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerName="extract" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.343339 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerName="extract" Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.343366 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f8297e-b534-44ff-9b38-4eb269960b80" containerName="console" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.343375 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f8297e-b534-44ff-9b38-4eb269960b80" containerName="console" Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.343385 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerName="util" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.343393 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" containerName="util" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.343566 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4266b92-4c6a-4651-8b75-a7e6479e5aff" 
containerName="extract" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.343581 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f8297e-b534-44ff-9b38-4eb269960b80" containerName="console" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.344148 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: W0127 07:28:53.346979 4764 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "metallb-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.347170 4764 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.347851 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.348064 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.348403 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.352785 4764 reflector.go:368] Caches 
populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ttjj2" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.356022 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk"] Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.437945 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-webhook-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.438028 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmjc7\" (UniqueName: \"kubernetes.io/projected/01f5c524-c371-4487-a1cc-619a76dba209-kube-api-access-xmjc7\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.438077 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-apiservice-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.539661 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-webhook-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: 
\"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.539751 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmjc7\" (UniqueName: \"kubernetes.io/projected/01f5c524-c371-4487-a1cc-619a76dba209-kube-api-access-xmjc7\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.539794 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-apiservice-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.567193 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmjc7\" (UniqueName: \"kubernetes.io/projected/01f5c524-c371-4487-a1cc-619a76dba209-kube-api-access-xmjc7\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.575420 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn"] Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.576219 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:53 crc kubenswrapper[4764]: W0127 07:28:53.578986 4764 reflector.go:561] object-"metallb-system"/"metallb-webhook-cert": failed to list *v1.Secret: secrets "metallb-webhook-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.579055 4764 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-webhook-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:28:53 crc kubenswrapper[4764]: W0127 07:28:53.579173 4764 reflector.go:561] object-"metallb-system"/"controller-dockercfg-frpm9": failed to list *v1.Secret: secrets "controller-dockercfg-frpm9" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.579193 4764 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-frpm9\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"controller-dockercfg-frpm9\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:28:53 crc kubenswrapper[4764]: W0127 07:28:53.579387 4764 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": failed to list *v1.Secret: secrets 
"metallb-operator-webhook-server-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 27 07:28:53 crc kubenswrapper[4764]: E0127 07:28:53.579461 4764 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-webhook-server-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.619214 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn"] Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.640699 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-webhook-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.640822 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-apiservice-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.640849 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7d4bn\" (UniqueName: \"kubernetes.io/projected/6dcc9557-47dd-412a-931a-e51cae97b1eb-kube-api-access-7d4bn\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.742270 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-webhook-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.742349 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-apiservice-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.742372 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d4bn\" (UniqueName: \"kubernetes.io/projected/6dcc9557-47dd-412a-931a-e51cae97b1eb-kube-api-access-7d4bn\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:53 crc kubenswrapper[4764]: I0127 07:28:53.762358 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d4bn\" (UniqueName: \"kubernetes.io/projected/6dcc9557-47dd-412a-931a-e51cae97b1eb-kube-api-access-7d4bn\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " 
pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.540860 4764 secret.go:188] Couldn't get secret metallb-system/metallb-operator-controller-manager-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.542691 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-apiservice-cert podName:01f5c524-c371-4487-a1cc-619a76dba209 nodeName:}" failed. No retries permitted until 2026-01-27 07:28:55.042663818 +0000 UTC m=+747.638286344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-apiservice-cert") pod "metallb-operator-controller-manager-748c4765c-2x4vk" (UID: "01f5c524-c371-4487-a1cc-619a76dba209") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.540871 4764 secret.go:188] Couldn't get secret metallb-system/metallb-operator-controller-manager-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.542922 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-webhook-cert podName:01f5c524-c371-4487-a1cc-619a76dba209 nodeName:}" failed. No retries permitted until 2026-01-27 07:28:55.042909865 +0000 UTC m=+747.638532391 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-webhook-cert") pod "metallb-operator-controller-manager-748c4765c-2x4vk" (UID: "01f5c524-c371-4487-a1cc-619a76dba209") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: I0127 07:28:54.715998 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.743033 4764 secret.go:188] Couldn't get secret metallb-system/metallb-operator-webhook-server-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.743184 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-apiservice-cert podName:6dcc9557-47dd-412a-931a-e51cae97b1eb nodeName:}" failed. No retries permitted until 2026-01-27 07:28:55.243155821 +0000 UTC m=+747.838778347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-apiservice-cert") pod "metallb-operator-webhook-server-78b8897d5b-w8jsn" (UID: "6dcc9557-47dd-412a-931a-e51cae97b1eb") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.743063 4764 secret.go:188] Couldn't get secret metallb-system/metallb-operator-webhook-server-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: E0127 07:28:54.743308 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-webhook-cert podName:6dcc9557-47dd-412a-931a-e51cae97b1eb nodeName:}" failed. 
No retries permitted until 2026-01-27 07:28:55.243272824 +0000 UTC m=+747.838895530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-webhook-cert") pod "metallb-operator-webhook-server-78b8897d5b-w8jsn" (UID: "6dcc9557-47dd-412a-931a-e51cae97b1eb") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:28:54 crc kubenswrapper[4764]: I0127 07:28:54.784835 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-frpm9" Jan 27 07:28:54 crc kubenswrapper[4764]: I0127 07:28:54.994869 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.018539 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.062704 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-apiservice-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.062830 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-webhook-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.068604 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-webhook-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.071819 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01f5c524-c371-4487-a1cc-619a76dba209-apiservice-cert\") pod \"metallb-operator-controller-manager-748c4765c-2x4vk\" (UID: \"01f5c524-c371-4487-a1cc-619a76dba209\") " pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.161828 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.265692 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-webhook-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.265781 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-apiservice-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.271806 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-apiservice-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.272349 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dcc9557-47dd-412a-931a-e51cae97b1eb-webhook-cert\") pod \"metallb-operator-webhook-server-78b8897d5b-w8jsn\" (UID: \"6dcc9557-47dd-412a-931a-e51cae97b1eb\") " pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.385657 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk"] Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.407391 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.725271 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn"] Jan 27 07:28:55 crc kubenswrapper[4764]: W0127 07:28:55.731863 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dcc9557_47dd_412a_931a_e51cae97b1eb.slice/crio-9cf25bd1a54f71945703ea3dfd09e92dcd1a858628b167b12c629ffe6e0f2427 WatchSource:0}: Error finding container 9cf25bd1a54f71945703ea3dfd09e92dcd1a858628b167b12c629ffe6e0f2427: Status 404 returned error can't find the container with id 9cf25bd1a54f71945703ea3dfd09e92dcd1a858628b167b12c629ffe6e0f2427 Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.992074 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" 
event={"ID":"6dcc9557-47dd-412a-931a-e51cae97b1eb","Type":"ContainerStarted","Data":"9cf25bd1a54f71945703ea3dfd09e92dcd1a858628b167b12c629ffe6e0f2427"} Jan 27 07:28:55 crc kubenswrapper[4764]: I0127 07:28:55.993783 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" event={"ID":"01f5c524-c371-4487-a1cc-619a76dba209","Type":"ContainerStarted","Data":"f8475981952fa3c9974a1166cac41554ebdf2ce11cfa49988d6aaf37133e09e2"} Jan 27 07:29:01 crc kubenswrapper[4764]: I0127 07:29:01.034759 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" event={"ID":"6dcc9557-47dd-412a-931a-e51cae97b1eb","Type":"ContainerStarted","Data":"249748cd3c7afb25be04445b5a0a9a30d0ebff8703a041b3c41fa85b0649e701"} Jan 27 07:29:01 crc kubenswrapper[4764]: I0127 07:29:01.035812 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:29:01 crc kubenswrapper[4764]: I0127 07:29:01.037215 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" event={"ID":"01f5c524-c371-4487-a1cc-619a76dba209","Type":"ContainerStarted","Data":"6116d8deea29c639e2a5663af94c040904c78b7cd1317429bbaa9b164e886ca4"} Jan 27 07:29:01 crc kubenswrapper[4764]: I0127 07:29:01.037363 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:29:01 crc kubenswrapper[4764]: I0127 07:29:01.057679 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" podStartSLOduration=3.227958987 podStartE2EDuration="8.057657384s" podCreationTimestamp="2026-01-27 07:28:53 +0000 UTC" firstStartedPulling="2026-01-27 07:28:55.735859229 +0000 UTC m=+748.331481755" 
lastFinishedPulling="2026-01-27 07:29:00.565557626 +0000 UTC m=+753.161180152" observedRunningTime="2026-01-27 07:29:01.05530641 +0000 UTC m=+753.650928936" watchObservedRunningTime="2026-01-27 07:29:01.057657384 +0000 UTC m=+753.653279910" Jan 27 07:29:01 crc kubenswrapper[4764]: I0127 07:29:01.528341 4764 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:29:15 crc kubenswrapper[4764]: I0127 07:29:15.417142 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-78b8897d5b-w8jsn" Jan 27 07:29:15 crc kubenswrapper[4764]: I0127 07:29:15.450683 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" podStartSLOduration=17.306868681 podStartE2EDuration="22.450657206s" podCreationTimestamp="2026-01-27 07:28:53 +0000 UTC" firstStartedPulling="2026-01-27 07:28:55.400232044 +0000 UTC m=+747.995854570" lastFinishedPulling="2026-01-27 07:29:00.544020559 +0000 UTC m=+753.139643095" observedRunningTime="2026-01-27 07:29:01.080414614 +0000 UTC m=+753.676037140" watchObservedRunningTime="2026-01-27 07:29:15.450657206 +0000 UTC m=+768.046279732" Jan 27 07:29:23 crc kubenswrapper[4764]: I0127 07:29:23.762855 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:29:23 crc kubenswrapper[4764]: I0127 07:29:23.763760 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.165630 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-748c4765c-2x4vk" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.855985 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wvhvc"] Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.858536 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.861293 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ccq5v" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.861606 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.861982 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.873802 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb"] Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.874906 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.880028 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.897755 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb"] Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.958668 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-startup\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.962979 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-metrics\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.963921 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26672501-4cd3-4eb2-b893-46badfedbd56-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4ktfb\" (UID: \"26672501-4cd3-4eb2-b893-46badfedbd56\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.964201 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgt7\" (UniqueName: \"kubernetes.io/projected/26672501-4cd3-4eb2-b893-46badfedbd56-kube-api-access-qtgt7\") pod \"frr-k8s-webhook-server-7df86c4f6c-4ktfb\" (UID: \"26672501-4cd3-4eb2-b893-46badfedbd56\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.964426 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-reloader\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.964591 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-conf\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.964711 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2160d959-833f-4b0d-bfbf-07f8884bbe35-metrics-certs\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.964927 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-sockets\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:35 crc kubenswrapper[4764]: I0127 07:29:35.965038 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sdd4\" (UniqueName: \"kubernetes.io/projected/2160d959-833f-4b0d-bfbf-07f8884bbe35-kube-api-access-8sdd4\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: 
I0127 07:29:36.008172 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-slnqc"] Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.009354 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.012239 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.012670 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.012924 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.013663 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-fw2wc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.026027 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-dz79t"] Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.027166 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.029147 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.041495 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dz79t"] Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066202 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-metrics-certs\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066252 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-sockets\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066273 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sdd4\" (UniqueName: \"kubernetes.io/projected/2160d959-833f-4b0d-bfbf-07f8884bbe35-kube-api-access-8sdd4\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066291 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-metrics-certs\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066311 
4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-startup\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066331 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-metrics\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066351 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26672501-4cd3-4eb2-b893-46badfedbd56-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4ktfb\" (UID: \"26672501-4cd3-4eb2-b893-46badfedbd56\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066374 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066399 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ql6d\" (UniqueName: \"kubernetes.io/projected/3e26bcb4-4093-406d-a87c-05f3873ec3f7-kube-api-access-8ql6d\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066421 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qtgt7\" (UniqueName: \"kubernetes.io/projected/26672501-4cd3-4eb2-b893-46badfedbd56-kube-api-access-qtgt7\") pod \"frr-k8s-webhook-server-7df86c4f6c-4ktfb\" (UID: \"26672501-4cd3-4eb2-b893-46badfedbd56\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066455 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-cert\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066476 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfkh\" (UniqueName: \"kubernetes.io/projected/4e2f3c15-fd12-4b82-a070-776cce0272b1-kube-api-access-6kfkh\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066504 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e2f3c15-fd12-4b82-a070-776cce0272b1-metallb-excludel2\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066530 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-reloader\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066553 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-conf\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.066570 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2160d959-833f-4b0d-bfbf-07f8884bbe35-metrics-certs\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.067138 4764 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.067382 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26672501-4cd3-4eb2-b893-46badfedbd56-cert podName:26672501-4cd3-4eb2-b893-46badfedbd56 nodeName:}" failed. No retries permitted until 2026-01-27 07:29:36.567350382 +0000 UTC m=+789.162972898 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/26672501-4cd3-4eb2-b893-46badfedbd56-cert") pod "frr-k8s-webhook-server-7df86c4f6c-4ktfb" (UID: "26672501-4cd3-4eb2-b893-46badfedbd56") : secret "frr-k8s-webhook-server-cert" not found Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.067967 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-sockets\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.068299 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-reloader\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.068362 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-conf\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.068480 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2160d959-833f-4b0d-bfbf-07f8884bbe35-metrics\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.069143 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2160d959-833f-4b0d-bfbf-07f8884bbe35-frr-startup\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" 
Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.073147 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2160d959-833f-4b0d-bfbf-07f8884bbe35-metrics-certs\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.083795 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgt7\" (UniqueName: \"kubernetes.io/projected/26672501-4cd3-4eb2-b893-46badfedbd56-kube-api-access-qtgt7\") pod \"frr-k8s-webhook-server-7df86c4f6c-4ktfb\" (UID: \"26672501-4cd3-4eb2-b893-46badfedbd56\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.084279 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sdd4\" (UniqueName: \"kubernetes.io/projected/2160d959-833f-4b0d-bfbf-07f8884bbe35-kube-api-access-8sdd4\") pod \"frr-k8s-wvhvc\" (UID: \"2160d959-833f-4b0d-bfbf-07f8884bbe35\") " pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.168048 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.168591 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ql6d\" (UniqueName: \"kubernetes.io/projected/3e26bcb4-4093-406d-a87c-05f3873ec3f7-kube-api-access-8ql6d\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.168648 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-cert\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.168680 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfkh\" (UniqueName: \"kubernetes.io/projected/4e2f3c15-fd12-4b82-a070-776cce0272b1-kube-api-access-6kfkh\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.168728 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e2f3c15-fd12-4b82-a070-776cce0272b1-metallb-excludel2\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.168790 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-metrics-certs\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.168817 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-metrics-certs\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.169606 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/4e2f3c15-fd12-4b82-a070-776cce0272b1-metallb-excludel2\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.168260 4764 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.169698 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist podName:4e2f3c15-fd12-4b82-a070-776cce0272b1 nodeName:}" failed. No retries permitted until 2026-01-27 07:29:36.669679873 +0000 UTC m=+789.265302399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist") pod "speaker-slnqc" (UID: "4e2f3c15-fd12-4b82-a070-776cce0272b1") : secret "metallb-memberlist" not found Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.169200 4764 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.169918 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-metrics-certs podName:3e26bcb4-4093-406d-a87c-05f3873ec3f7 nodeName:}" failed. No retries permitted until 2026-01-27 07:29:36.669892178 +0000 UTC m=+789.265514704 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-metrics-certs") pod "controller-6968d8fdc4-dz79t" (UID: "3e26bcb4-4093-406d-a87c-05f3873ec3f7") : secret "controller-certs-secret" not found Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.170697 4764 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.175317 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-metrics-certs\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.176742 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.182863 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-cert\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.186113 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ql6d\" (UniqueName: \"kubernetes.io/projected/3e26bcb4-4093-406d-a87c-05f3873ec3f7-kube-api-access-8ql6d\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.200851 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfkh\" (UniqueName: \"kubernetes.io/projected/4e2f3c15-fd12-4b82-a070-776cce0272b1-kube-api-access-6kfkh\") pod 
\"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.575853 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26672501-4cd3-4eb2-b893-46badfedbd56-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4ktfb\" (UID: \"26672501-4cd3-4eb2-b893-46badfedbd56\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.580156 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26672501-4cd3-4eb2-b893-46badfedbd56-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4ktfb\" (UID: \"26672501-4cd3-4eb2-b893-46badfedbd56\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.677759 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-metrics-certs\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.677852 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.678064 4764 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:29:36 crc kubenswrapper[4764]: E0127 07:29:36.678155 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist 
podName:4e2f3c15-fd12-4b82-a070-776cce0272b1 nodeName:}" failed. No retries permitted until 2026-01-27 07:29:37.678134029 +0000 UTC m=+790.273756555 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist") pod "speaker-slnqc" (UID: "4e2f3c15-fd12-4b82-a070-776cce0272b1") : secret "metallb-memberlist" not found Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.681935 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e26bcb4-4093-406d-a87c-05f3873ec3f7-metrics-certs\") pod \"controller-6968d8fdc4-dz79t\" (UID: \"3e26bcb4-4093-406d-a87c-05f3873ec3f7\") " pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.793814 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:36 crc kubenswrapper[4764]: I0127 07:29:36.943084 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:37 crc kubenswrapper[4764]: I0127 07:29:37.176016 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dz79t"] Jan 27 07:29:37 crc kubenswrapper[4764]: W0127 07:29:37.182161 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e26bcb4_4093_406d_a87c_05f3873ec3f7.slice/crio-b17ab8df6d2c9ec6a654edd270bddcc9e6b5ba9487fdf0d54a1e4bf67d0ac9b5 WatchSource:0}: Error finding container b17ab8df6d2c9ec6a654edd270bddcc9e6b5ba9487fdf0d54a1e4bf67d0ac9b5: Status 404 returned error can't find the container with id b17ab8df6d2c9ec6a654edd270bddcc9e6b5ba9487fdf0d54a1e4bf67d0ac9b5 Jan 27 07:29:37 crc kubenswrapper[4764]: I0127 07:29:37.272289 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dz79t" event={"ID":"3e26bcb4-4093-406d-a87c-05f3873ec3f7","Type":"ContainerStarted","Data":"b17ab8df6d2c9ec6a654edd270bddcc9e6b5ba9487fdf0d54a1e4bf67d0ac9b5"} Jan 27 07:29:37 crc kubenswrapper[4764]: I0127 07:29:37.273166 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerStarted","Data":"ac7fa35afd005225336b5f7e876fd5ba8c8ab234116c3897aa1429b104e3cc7c"} Jan 27 07:29:37 crc kubenswrapper[4764]: I0127 07:29:37.321910 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb"] Jan 27 07:29:37 crc kubenswrapper[4764]: I0127 07:29:37.731706 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:37 crc kubenswrapper[4764]: I0127 
07:29:37.737894 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e2f3c15-fd12-4b82-a070-776cce0272b1-memberlist\") pod \"speaker-slnqc\" (UID: \"4e2f3c15-fd12-4b82-a070-776cce0272b1\") " pod="metallb-system/speaker-slnqc" Jan 27 07:29:37 crc kubenswrapper[4764]: I0127 07:29:37.823626 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-slnqc" Jan 27 07:29:38 crc kubenswrapper[4764]: I0127 07:29:38.306989 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dz79t" event={"ID":"3e26bcb4-4093-406d-a87c-05f3873ec3f7","Type":"ContainerStarted","Data":"790a1065cb2448c7e80bcd08ab25a0aedd788662a3ea6370b0c566f036e51ea6"} Jan 27 07:29:38 crc kubenswrapper[4764]: I0127 07:29:38.307334 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dz79t" event={"ID":"3e26bcb4-4093-406d-a87c-05f3873ec3f7","Type":"ContainerStarted","Data":"15c22d6e6554d4c057b1bc7dbbdffdb5571b0e4722cc98c6626ac6ebcd4f81b0"} Jan 27 07:29:38 crc kubenswrapper[4764]: I0127 07:29:38.308393 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:38 crc kubenswrapper[4764]: I0127 07:29:38.310621 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-slnqc" event={"ID":"4e2f3c15-fd12-4b82-a070-776cce0272b1","Type":"ContainerStarted","Data":"5a223e386dc6aedf3a439ae48f5d8e54e2a8f677e859d1f019e6bf85b1a97f7b"} Jan 27 07:29:38 crc kubenswrapper[4764]: I0127 07:29:38.310654 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-slnqc" event={"ID":"4e2f3c15-fd12-4b82-a070-776cce0272b1","Type":"ContainerStarted","Data":"725f94315a51594a0565f8ed8cb02cf99899d0062ea28c9a9666d0da0d65df5a"} Jan 27 07:29:38 crc kubenswrapper[4764]: I0127 07:29:38.316227 4764 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" event={"ID":"26672501-4cd3-4eb2-b893-46badfedbd56","Type":"ContainerStarted","Data":"ce7b2113e77f4b5c8288f9dc7d6337ae73b2903aa6d06f1c4a0264bbf5d5a34a"} Jan 27 07:29:38 crc kubenswrapper[4764]: I0127 07:29:38.350648 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-dz79t" podStartSLOduration=2.3506179 podStartE2EDuration="2.3506179s" podCreationTimestamp="2026-01-27 07:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:29:38.33208222 +0000 UTC m=+790.927704756" watchObservedRunningTime="2026-01-27 07:29:38.3506179 +0000 UTC m=+790.946240606" Jan 27 07:29:39 crc kubenswrapper[4764]: I0127 07:29:39.339103 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-slnqc" event={"ID":"4e2f3c15-fd12-4b82-a070-776cce0272b1","Type":"ContainerStarted","Data":"876e0c79926e839f882e16969275fc5fbebd6dc287195757adfecbc757d62085"} Jan 27 07:29:39 crc kubenswrapper[4764]: I0127 07:29:39.372258 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-slnqc" podStartSLOduration=4.372227121 podStartE2EDuration="4.372227121s" podCreationTimestamp="2026-01-27 07:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:29:39.367123824 +0000 UTC m=+791.962746360" watchObservedRunningTime="2026-01-27 07:29:39.372227121 +0000 UTC m=+791.967849667" Jan 27 07:29:40 crc kubenswrapper[4764]: I0127 07:29:40.346346 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-slnqc" Jan 27 07:29:45 crc kubenswrapper[4764]: I0127 07:29:45.389764 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" event={"ID":"26672501-4cd3-4eb2-b893-46badfedbd56","Type":"ContainerStarted","Data":"e451d46838df92bd89fbee2937c32025f6601bc15367ba1fe536bd713d1a09ba"} Jan 27 07:29:45 crc kubenswrapper[4764]: I0127 07:29:45.390483 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:45 crc kubenswrapper[4764]: I0127 07:29:45.393664 4764 generic.go:334] "Generic (PLEG): container finished" podID="2160d959-833f-4b0d-bfbf-07f8884bbe35" containerID="5c242c52874d5c5dde24ed345961d98f58076e1a1cdd3d158e68c96874598132" exitCode=0 Jan 27 07:29:45 crc kubenswrapper[4764]: I0127 07:29:45.393701 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerDied","Data":"5c242c52874d5c5dde24ed345961d98f58076e1a1cdd3d158e68c96874598132"} Jan 27 07:29:45 crc kubenswrapper[4764]: I0127 07:29:45.423579 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" podStartSLOduration=3.414713411 podStartE2EDuration="10.423549071s" podCreationTimestamp="2026-01-27 07:29:35 +0000 UTC" firstStartedPulling="2026-01-27 07:29:37.327084523 +0000 UTC m=+789.922707049" lastFinishedPulling="2026-01-27 07:29:44.335920183 +0000 UTC m=+796.931542709" observedRunningTime="2026-01-27 07:29:45.417962362 +0000 UTC m=+798.013584908" watchObservedRunningTime="2026-01-27 07:29:45.423549071 +0000 UTC m=+798.019171597" Jan 27 07:29:46 crc kubenswrapper[4764]: I0127 07:29:46.404623 4764 generic.go:334] "Generic (PLEG): container finished" podID="2160d959-833f-4b0d-bfbf-07f8884bbe35" containerID="cfbf96f147f4351ab3780e377d8ef7d3b5c6f4007401e6192edf55a524d02dbe" exitCode=0 Jan 27 07:29:46 crc kubenswrapper[4764]: I0127 07:29:46.404740 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerDied","Data":"cfbf96f147f4351ab3780e377d8ef7d3b5c6f4007401e6192edf55a524d02dbe"} Jan 27 07:29:47 crc kubenswrapper[4764]: I0127 07:29:47.423029 4764 generic.go:334] "Generic (PLEG): container finished" podID="2160d959-833f-4b0d-bfbf-07f8884bbe35" containerID="cacfa29edf6407d6f20880ea2c13992cf5b1a09629ac5a6b1fc6ad2e31f7d005" exitCode=0 Jan 27 07:29:47 crc kubenswrapper[4764]: I0127 07:29:47.423080 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerDied","Data":"cacfa29edf6407d6f20880ea2c13992cf5b1a09629ac5a6b1fc6ad2e31f7d005"} Jan 27 07:29:48 crc kubenswrapper[4764]: I0127 07:29:48.433710 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerStarted","Data":"8cb79cdcfe93e394f59d45a69fff4ac7d51124d52640071f97b4c98ecf98430b"} Jan 27 07:29:48 crc kubenswrapper[4764]: I0127 07:29:48.434181 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerStarted","Data":"9d919706ff08d41b761bf9bd520ef82f3ac79035f63491c3dff6428734864f1a"} Jan 27 07:29:48 crc kubenswrapper[4764]: I0127 07:29:48.434196 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerStarted","Data":"18643e0e27f117d92b711e726a4c09f37cd8bd12ef5f6792e894900aa94274de"} Jan 27 07:29:48 crc kubenswrapper[4764]: I0127 07:29:48.434208 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerStarted","Data":"bd4f829f76904c6be5b11e111122dea48d4885df4ee677bb19b2db61cf074873"} Jan 27 07:29:48 crc 
kubenswrapper[4764]: I0127 07:29:48.434335 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:48 crc kubenswrapper[4764]: I0127 07:29:48.434350 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerStarted","Data":"8fd7138d25f0e7c59c539fe2da79a6a0a3c4279cf5e9d3b4a2318b376a0ccfc3"} Jan 27 07:29:48 crc kubenswrapper[4764]: I0127 07:29:48.434361 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wvhvc" event={"ID":"2160d959-833f-4b0d-bfbf-07f8884bbe35","Type":"ContainerStarted","Data":"e65fe0ec5066be536f12abf387ce4f36c3d062b06981790e1d457d72c1c00451"} Jan 27 07:29:48 crc kubenswrapper[4764]: I0127 07:29:48.460836 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wvhvc" podStartSLOduration=5.485142965 podStartE2EDuration="13.460816183s" podCreationTimestamp="2026-01-27 07:29:35 +0000 UTC" firstStartedPulling="2026-01-27 07:29:36.351676952 +0000 UTC m=+788.947299478" lastFinishedPulling="2026-01-27 07:29:44.32735017 +0000 UTC m=+796.922972696" observedRunningTime="2026-01-27 07:29:48.460472685 +0000 UTC m=+801.056095211" watchObservedRunningTime="2026-01-27 07:29:48.460816183 +0000 UTC m=+801.056438709" Jan 27 07:29:51 crc kubenswrapper[4764]: I0127 07:29:51.177786 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:51 crc kubenswrapper[4764]: I0127 07:29:51.253121 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:29:53 crc kubenswrapper[4764]: I0127 07:29:53.763230 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:29:53 crc kubenswrapper[4764]: I0127 07:29:53.763354 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:29:56 crc kubenswrapper[4764]: I0127 07:29:56.802680 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4ktfb" Jan 27 07:29:56 crc kubenswrapper[4764]: I0127 07:29:56.948502 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-dz79t" Jan 27 07:29:57 crc kubenswrapper[4764]: I0127 07:29:57.831349 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-slnqc" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.184941 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl"] Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.186590 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.189461 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.191829 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.199523 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl"] Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.309771 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e020486-9ae1-40fe-8d8a-075c436443cb-config-volume\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.310055 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwv27\" (UniqueName: \"kubernetes.io/projected/2e020486-9ae1-40fe-8d8a-075c436443cb-kube-api-access-bwv27\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.310117 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e020486-9ae1-40fe-8d8a-075c436443cb-secret-volume\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.412482 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e020486-9ae1-40fe-8d8a-075c436443cb-config-volume\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.412583 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwv27\" (UniqueName: \"kubernetes.io/projected/2e020486-9ae1-40fe-8d8a-075c436443cb-kube-api-access-bwv27\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.412607 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e020486-9ae1-40fe-8d8a-075c436443cb-secret-volume\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.413917 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e020486-9ae1-40fe-8d8a-075c436443cb-config-volume\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.421077 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2e020486-9ae1-40fe-8d8a-075c436443cb-secret-volume\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.438307 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwv27\" (UniqueName: \"kubernetes.io/projected/2e020486-9ae1-40fe-8d8a-075c436443cb-kube-api-access-bwv27\") pod \"collect-profiles-29491650-tbtxl\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.520537 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.604035 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2t247"] Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.605048 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2t247" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.607008 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.607310 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.611394 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-76hzn" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.616532 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2t247"] Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.716743 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57x9\" (UniqueName: \"kubernetes.io/projected/2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce-kube-api-access-v57x9\") pod \"openstack-operator-index-2t247\" (UID: \"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce\") " pod="openstack-operators/openstack-operator-index-2t247" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.819124 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57x9\" (UniqueName: \"kubernetes.io/projected/2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce-kube-api-access-v57x9\") pod \"openstack-operator-index-2t247\" (UID: \"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce\") " pod="openstack-operators/openstack-operator-index-2t247" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.834387 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl"] Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.842845 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v57x9\" (UniqueName: \"kubernetes.io/projected/2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce-kube-api-access-v57x9\") pod \"openstack-operator-index-2t247\" (UID: \"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce\") " pod="openstack-operators/openstack-operator-index-2t247" Jan 27 07:30:00 crc kubenswrapper[4764]: I0127 07:30:00.931586 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2t247" Jan 27 07:30:01 crc kubenswrapper[4764]: W0127 07:30:01.340411 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dd1b0ce_95fc_4841_b871_1b1d0d3da7ce.slice/crio-f9a783183253bab8712c153059d24108f507ba5c4829fde4cd5add88cae6d3de WatchSource:0}: Error finding container f9a783183253bab8712c153059d24108f507ba5c4829fde4cd5add88cae6d3de: Status 404 returned error can't find the container with id f9a783183253bab8712c153059d24108f507ba5c4829fde4cd5add88cae6d3de Jan 27 07:30:01 crc kubenswrapper[4764]: I0127 07:30:01.344414 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2t247"] Jan 27 07:30:01 crc kubenswrapper[4764]: I0127 07:30:01.561144 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" event={"ID":"2e020486-9ae1-40fe-8d8a-075c436443cb","Type":"ContainerStarted","Data":"58b37411453455e75814b6dc90aa64a076f49a4556cdaa984d10dfbbd9dbc73c"} Jan 27 07:30:01 crc kubenswrapper[4764]: I0127 07:30:01.561204 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" event={"ID":"2e020486-9ae1-40fe-8d8a-075c436443cb","Type":"ContainerStarted","Data":"ef36594549f180ef60408ea4eb79de206135adf3b995fbdfc2761d6e184ba637"} Jan 27 07:30:01 crc kubenswrapper[4764]: I0127 07:30:01.563617 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-2t247" event={"ID":"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce","Type":"ContainerStarted","Data":"f9a783183253bab8712c153059d24108f507ba5c4829fde4cd5add88cae6d3de"} Jan 27 07:30:01 crc kubenswrapper[4764]: I0127 07:30:01.588147 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" podStartSLOduration=1.588109258 podStartE2EDuration="1.588109258s" podCreationTimestamp="2026-01-27 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:30:01.580257373 +0000 UTC m=+814.175879899" watchObservedRunningTime="2026-01-27 07:30:01.588109258 +0000 UTC m=+814.183731784" Jan 27 07:30:02 crc kubenswrapper[4764]: I0127 07:30:02.574748 4764 generic.go:334] "Generic (PLEG): container finished" podID="2e020486-9ae1-40fe-8d8a-075c436443cb" containerID="58b37411453455e75814b6dc90aa64a076f49a4556cdaa984d10dfbbd9dbc73c" exitCode=0 Jan 27 07:30:02 crc kubenswrapper[4764]: I0127 07:30:02.574830 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" event={"ID":"2e020486-9ae1-40fe-8d8a-075c436443cb","Type":"ContainerDied","Data":"58b37411453455e75814b6dc90aa64a076f49a4556cdaa984d10dfbbd9dbc73c"} Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.584526 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2t247" event={"ID":"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce","Type":"ContainerStarted","Data":"15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1"} Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.609637 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2t247" podStartSLOduration=2.067220716 podStartE2EDuration="3.609606246s" 
podCreationTimestamp="2026-01-27 07:30:00 +0000 UTC" firstStartedPulling="2026-01-27 07:30:01.343030152 +0000 UTC m=+813.938652698" lastFinishedPulling="2026-01-27 07:30:02.885415692 +0000 UTC m=+815.481038228" observedRunningTime="2026-01-27 07:30:03.601289089 +0000 UTC m=+816.196911695" watchObservedRunningTime="2026-01-27 07:30:03.609606246 +0000 UTC m=+816.205228772" Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.882037 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.973474 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e020486-9ae1-40fe-8d8a-075c436443cb-secret-volume\") pod \"2e020486-9ae1-40fe-8d8a-075c436443cb\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.973609 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwv27\" (UniqueName: \"kubernetes.io/projected/2e020486-9ae1-40fe-8d8a-075c436443cb-kube-api-access-bwv27\") pod \"2e020486-9ae1-40fe-8d8a-075c436443cb\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.973636 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e020486-9ae1-40fe-8d8a-075c436443cb-config-volume\") pod \"2e020486-9ae1-40fe-8d8a-075c436443cb\" (UID: \"2e020486-9ae1-40fe-8d8a-075c436443cb\") " Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.978562 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e020486-9ae1-40fe-8d8a-075c436443cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e020486-9ae1-40fe-8d8a-075c436443cb" (UID: 
"2e020486-9ae1-40fe-8d8a-075c436443cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.983705 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e020486-9ae1-40fe-8d8a-075c436443cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e020486-9ae1-40fe-8d8a-075c436443cb" (UID: "2e020486-9ae1-40fe-8d8a-075c436443cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.989602 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e020486-9ae1-40fe-8d8a-075c436443cb-kube-api-access-bwv27" (OuterVolumeSpecName: "kube-api-access-bwv27") pod "2e020486-9ae1-40fe-8d8a-075c436443cb" (UID: "2e020486-9ae1-40fe-8d8a-075c436443cb"). InnerVolumeSpecName "kube-api-access-bwv27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:30:03 crc kubenswrapper[4764]: I0127 07:30:03.992329 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2t247"] Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.075296 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwv27\" (UniqueName: \"kubernetes.io/projected/2e020486-9ae1-40fe-8d8a-075c436443cb-kube-api-access-bwv27\") on node \"crc\" DevicePath \"\"" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.075347 4764 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e020486-9ae1-40fe-8d8a-075c436443cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.075368 4764 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e020486-9ae1-40fe-8d8a-075c436443cb-secret-volume\") on node \"crc\" 
DevicePath \"\"" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.580246 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tx6xn"] Jan 27 07:30:04 crc kubenswrapper[4764]: E0127 07:30:04.580578 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e020486-9ae1-40fe-8d8a-075c436443cb" containerName="collect-profiles" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.580593 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e020486-9ae1-40fe-8d8a-075c436443cb" containerName="collect-profiles" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.580725 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e020486-9ae1-40fe-8d8a-075c436443cb" containerName="collect-profiles" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.581214 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.591804 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tx6xn"] Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.596376 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.597138 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491650-tbtxl" event={"ID":"2e020486-9ae1-40fe-8d8a-075c436443cb","Type":"ContainerDied","Data":"ef36594549f180ef60408ea4eb79de206135adf3b995fbdfc2761d6e184ba637"} Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.597170 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef36594549f180ef60408ea4eb79de206135adf3b995fbdfc2761d6e184ba637" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.684710 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzt62\" (UniqueName: \"kubernetes.io/projected/1c35ba5a-68f5-474a-925e-f7580994a34c-kube-api-access-tzt62\") pod \"openstack-operator-index-tx6xn\" (UID: \"1c35ba5a-68f5-474a-925e-f7580994a34c\") " pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.786538 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzt62\" (UniqueName: \"kubernetes.io/projected/1c35ba5a-68f5-474a-925e-f7580994a34c-kube-api-access-tzt62\") pod \"openstack-operator-index-tx6xn\" (UID: \"1c35ba5a-68f5-474a-925e-f7580994a34c\") " pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.807816 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzt62\" (UniqueName: \"kubernetes.io/projected/1c35ba5a-68f5-474a-925e-f7580994a34c-kube-api-access-tzt62\") pod \"openstack-operator-index-tx6xn\" (UID: \"1c35ba5a-68f5-474a-925e-f7580994a34c\") " pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:04 crc kubenswrapper[4764]: I0127 07:30:04.912581 4764 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:05 crc kubenswrapper[4764]: I0127 07:30:05.296120 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tx6xn"] Jan 27 07:30:05 crc kubenswrapper[4764]: W0127 07:30:05.304056 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c35ba5a_68f5_474a_925e_f7580994a34c.slice/crio-f6861509b667c51447549ee9ad6bd71c82ff8fb264379c02305da353e6f5870d WatchSource:0}: Error finding container f6861509b667c51447549ee9ad6bd71c82ff8fb264379c02305da353e6f5870d: Status 404 returned error can't find the container with id f6861509b667c51447549ee9ad6bd71c82ff8fb264379c02305da353e6f5870d Jan 27 07:30:05 crc kubenswrapper[4764]: I0127 07:30:05.606090 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2t247" podUID="2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce" containerName="registry-server" containerID="cri-o://15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1" gracePeriod=2 Jan 27 07:30:05 crc kubenswrapper[4764]: I0127 07:30:05.606584 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tx6xn" event={"ID":"1c35ba5a-68f5-474a-925e-f7580994a34c","Type":"ContainerStarted","Data":"f6861509b667c51447549ee9ad6bd71c82ff8fb264379c02305da353e6f5870d"} Jan 27 07:30:05 crc kubenswrapper[4764]: I0127 07:30:05.997240 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2t247" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.104264 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v57x9\" (UniqueName: \"kubernetes.io/projected/2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce-kube-api-access-v57x9\") pod \"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce\" (UID: \"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce\") " Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.109057 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce-kube-api-access-v57x9" (OuterVolumeSpecName: "kube-api-access-v57x9") pod "2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce" (UID: "2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce"). InnerVolumeSpecName "kube-api-access-v57x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.181317 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wvhvc" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.206499 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v57x9\" (UniqueName: \"kubernetes.io/projected/2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce-kube-api-access-v57x9\") on node \"crc\" DevicePath \"\"" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.614805 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tx6xn" event={"ID":"1c35ba5a-68f5-474a-925e-f7580994a34c","Type":"ContainerStarted","Data":"542d5a1e7b8294823e3b981ef0c5690491e8dcadfd70e57fbe2e104c0dad0673"} Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.617622 4764 generic.go:334] "Generic (PLEG): container finished" podID="2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce" containerID="15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1" exitCode=0 Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 
07:30:06.617665 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2t247" event={"ID":"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce","Type":"ContainerDied","Data":"15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1"} Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.617720 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2t247" event={"ID":"2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce","Type":"ContainerDied","Data":"f9a783183253bab8712c153059d24108f507ba5c4829fde4cd5add88cae6d3de"} Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.617736 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2t247" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.617745 4764 scope.go:117] "RemoveContainer" containerID="15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.639541 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tx6xn" podStartSLOduration=2.19911944 podStartE2EDuration="2.639519866s" podCreationTimestamp="2026-01-27 07:30:04 +0000 UTC" firstStartedPulling="2026-01-27 07:30:05.306653368 +0000 UTC m=+817.902275894" lastFinishedPulling="2026-01-27 07:30:05.747053784 +0000 UTC m=+818.342676320" observedRunningTime="2026-01-27 07:30:06.636298046 +0000 UTC m=+819.231920582" watchObservedRunningTime="2026-01-27 07:30:06.639519866 +0000 UTC m=+819.235142392" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.645092 4764 scope.go:117] "RemoveContainer" containerID="15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1" Jan 27 07:30:06 crc kubenswrapper[4764]: E0127 07:30:06.646015 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1\": container with ID starting with 15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1 not found: ID does not exist" containerID="15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.646048 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1"} err="failed to get container status \"15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1\": rpc error: code = NotFound desc = could not find container \"15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1\": container with ID starting with 15fd75449dca1883cdda80caac7e3fad171dca1abb359885604e93dfb28d13c1 not found: ID does not exist" Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.658542 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2t247"] Jan 27 07:30:06 crc kubenswrapper[4764]: I0127 07:30:06.662190 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2t247"] Jan 27 07:30:08 crc kubenswrapper[4764]: I0127 07:30:08.448385 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce" path="/var/lib/kubelet/pods/2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce/volumes" Jan 27 07:30:14 crc kubenswrapper[4764]: I0127 07:30:14.913022 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:14 crc kubenswrapper[4764]: I0127 07:30:14.913847 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:14 crc kubenswrapper[4764]: I0127 07:30:14.949207 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:15 crc kubenswrapper[4764]: I0127 07:30:15.709669 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tx6xn" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.015753 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57"] Jan 27 07:30:17 crc kubenswrapper[4764]: E0127 07:30:17.016107 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce" containerName="registry-server" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.016129 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce" containerName="registry-server" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.016339 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd1b0ce-95fc-4841-b871-1b1d0d3da7ce" containerName="registry-server" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.017837 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.022637 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jn4kl" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.025149 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57"] Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.068884 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7mml\" (UniqueName: \"kubernetes.io/projected/c3f27bb3-8196-402a-a147-11074280c9d6-kube-api-access-m7mml\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.068963 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.069137 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 
07:30:17.171042 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.171197 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7mml\" (UniqueName: \"kubernetes.io/projected/c3f27bb3-8196-402a-a147-11074280c9d6-kube-api-access-m7mml\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.171258 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.172178 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-util\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.172204 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-bundle\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.195324 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7mml\" (UniqueName: \"kubernetes.io/projected/c3f27bb3-8196-402a-a147-11074280c9d6-kube-api-access-m7mml\") pod \"b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.342109 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:17 crc kubenswrapper[4764]: I0127 07:30:17.767844 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57"] Jan 27 07:30:18 crc kubenswrapper[4764]: I0127 07:30:18.710101 4764 generic.go:334] "Generic (PLEG): container finished" podID="c3f27bb3-8196-402a-a147-11074280c9d6" containerID="7ec855d01a5129c3c04a64951d76300d5bd23bb545193fa60d43aa829d46e497" exitCode=0 Jan 27 07:30:18 crc kubenswrapper[4764]: I0127 07:30:18.710180 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" event={"ID":"c3f27bb3-8196-402a-a147-11074280c9d6","Type":"ContainerDied","Data":"7ec855d01a5129c3c04a64951d76300d5bd23bb545193fa60d43aa829d46e497"} Jan 27 07:30:18 crc kubenswrapper[4764]: I0127 07:30:18.710224 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" event={"ID":"c3f27bb3-8196-402a-a147-11074280c9d6","Type":"ContainerStarted","Data":"ef51c88cca92dc155334e2d361016cf722fc08352a2387a280b2fa5be08ed7ab"} Jan 27 07:30:20 crc kubenswrapper[4764]: I0127 07:30:20.725680 4764 generic.go:334] "Generic (PLEG): container finished" podID="c3f27bb3-8196-402a-a147-11074280c9d6" containerID="9c54b443268649c4c6a25c1da2f2910b5aea035fcd1f5ba864629090bbc4972b" exitCode=0 Jan 27 07:30:20 crc kubenswrapper[4764]: I0127 07:30:20.725742 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" event={"ID":"c3f27bb3-8196-402a-a147-11074280c9d6","Type":"ContainerDied","Data":"9c54b443268649c4c6a25c1da2f2910b5aea035fcd1f5ba864629090bbc4972b"} Jan 27 07:30:21 crc kubenswrapper[4764]: I0127 07:30:21.738588 4764 generic.go:334] "Generic (PLEG): container finished" podID="c3f27bb3-8196-402a-a147-11074280c9d6" containerID="218db55a6eb78cb254ab12ed40a7de0f69b8479f739a53f131742267b6d0b699" exitCode=0 Jan 27 07:30:21 crc kubenswrapper[4764]: I0127 07:30:21.738666 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" event={"ID":"c3f27bb3-8196-402a-a147-11074280c9d6","Type":"ContainerDied","Data":"218db55a6eb78cb254ab12ed40a7de0f69b8479f739a53f131742267b6d0b699"} Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.009512 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.062056 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-bundle\") pod \"c3f27bb3-8196-402a-a147-11074280c9d6\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.062129 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7mml\" (UniqueName: \"kubernetes.io/projected/c3f27bb3-8196-402a-a147-11074280c9d6-kube-api-access-m7mml\") pod \"c3f27bb3-8196-402a-a147-11074280c9d6\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.062234 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-util\") pod \"c3f27bb3-8196-402a-a147-11074280c9d6\" (UID: \"c3f27bb3-8196-402a-a147-11074280c9d6\") " Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.062795 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-bundle" (OuterVolumeSpecName: "bundle") pod "c3f27bb3-8196-402a-a147-11074280c9d6" (UID: "c3f27bb3-8196-402a-a147-11074280c9d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.068619 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f27bb3-8196-402a-a147-11074280c9d6-kube-api-access-m7mml" (OuterVolumeSpecName: "kube-api-access-m7mml") pod "c3f27bb3-8196-402a-a147-11074280c9d6" (UID: "c3f27bb3-8196-402a-a147-11074280c9d6"). InnerVolumeSpecName "kube-api-access-m7mml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.163843 4764 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.163890 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7mml\" (UniqueName: \"kubernetes.io/projected/c3f27bb3-8196-402a-a147-11074280c9d6-kube-api-access-m7mml\") on node \"crc\" DevicePath \"\"" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.384900 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-util" (OuterVolumeSpecName: "util") pod "c3f27bb3-8196-402a-a147-11074280c9d6" (UID: "c3f27bb3-8196-402a-a147-11074280c9d6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.468068 4764 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3f27bb3-8196-402a-a147-11074280c9d6-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.757774 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" event={"ID":"c3f27bb3-8196-402a-a147-11074280c9d6","Type":"ContainerDied","Data":"ef51c88cca92dc155334e2d361016cf722fc08352a2387a280b2fa5be08ed7ab"} Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.758123 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef51c88cca92dc155334e2d361016cf722fc08352a2387a280b2fa5be08ed7ab" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.757933 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.763812 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.773738 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.773843 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.774756 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99095ad6a6beeaefac730e02f9f2d74bfc284be02a2e809f2e291e6bcbaaa57e"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:30:23 crc kubenswrapper[4764]: I0127 07:30:23.774849 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://99095ad6a6beeaefac730e02f9f2d74bfc284be02a2e809f2e291e6bcbaaa57e" gracePeriod=600 Jan 27 07:30:24 crc kubenswrapper[4764]: I0127 07:30:24.771921 4764 generic.go:334] "Generic (PLEG): 
container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="99095ad6a6beeaefac730e02f9f2d74bfc284be02a2e809f2e291e6bcbaaa57e" exitCode=0 Jan 27 07:30:24 crc kubenswrapper[4764]: I0127 07:30:24.772017 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"99095ad6a6beeaefac730e02f9f2d74bfc284be02a2e809f2e291e6bcbaaa57e"} Jan 27 07:30:24 crc kubenswrapper[4764]: I0127 07:30:24.772281 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"1c23ae4d5813d2d046c09e56678bc336de540068540ec8945ee83efb0e572821"} Jan 27 07:30:24 crc kubenswrapper[4764]: I0127 07:30:24.772308 4764 scope.go:117] "RemoveContainer" containerID="3177bb78cb6789559338ca16c2f36759a9fd88a577260ff5fe5f7b34c66220a8" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.481093 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg"] Jan 27 07:30:29 crc kubenswrapper[4764]: E0127 07:30:29.482117 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f27bb3-8196-402a-a147-11074280c9d6" containerName="pull" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.482136 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f27bb3-8196-402a-a147-11074280c9d6" containerName="pull" Jan 27 07:30:29 crc kubenswrapper[4764]: E0127 07:30:29.482160 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f27bb3-8196-402a-a147-11074280c9d6" containerName="extract" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.482169 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f27bb3-8196-402a-a147-11074280c9d6" containerName="extract" Jan 27 07:30:29 crc kubenswrapper[4764]: 
E0127 07:30:29.482181 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f27bb3-8196-402a-a147-11074280c9d6" containerName="util" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.482189 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f27bb3-8196-402a-a147-11074280c9d6" containerName="util" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.482327 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f27bb3-8196-402a-a147-11074280c9d6" containerName="extract" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.482958 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.489325 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-t27lq" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.512043 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg"] Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.572654 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rzn\" (UniqueName: \"kubernetes.io/projected/08a061bb-1d9c-4d54-a894-e8352394b3a1-kube-api-access-76rzn\") pod \"openstack-operator-controller-init-6bfcf7b875-t6krg\" (UID: \"08a061bb-1d9c-4d54-a894-e8352394b3a1\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.673810 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rzn\" (UniqueName: \"kubernetes.io/projected/08a061bb-1d9c-4d54-a894-e8352394b3a1-kube-api-access-76rzn\") pod \"openstack-operator-controller-init-6bfcf7b875-t6krg\" (UID: 
\"08a061bb-1d9c-4d54-a894-e8352394b3a1\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.701503 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rzn\" (UniqueName: \"kubernetes.io/projected/08a061bb-1d9c-4d54-a894-e8352394b3a1-kube-api-access-76rzn\") pod \"openstack-operator-controller-init-6bfcf7b875-t6krg\" (UID: \"08a061bb-1d9c-4d54-a894-e8352394b3a1\") " pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" Jan 27 07:30:29 crc kubenswrapper[4764]: I0127 07:30:29.802871 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" Jan 27 07:30:30 crc kubenswrapper[4764]: I0127 07:30:30.069893 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg"] Jan 27 07:30:30 crc kubenswrapper[4764]: I0127 07:30:30.825750 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" event={"ID":"08a061bb-1d9c-4d54-a894-e8352394b3a1","Type":"ContainerStarted","Data":"5e978f4c7c8cff7f4a6b5677d0db7b7d1704a748bd91afd30ba0deb19da11112"} Jan 27 07:30:35 crc kubenswrapper[4764]: I0127 07:30:35.867840 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" event={"ID":"08a061bb-1d9c-4d54-a894-e8352394b3a1","Type":"ContainerStarted","Data":"e9390ca7b9a65079336c0d9390dce1649aef4c9ae601b7a81ecfc7eed206c665"} Jan 27 07:30:35 crc kubenswrapper[4764]: I0127 07:30:35.868641 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" Jan 27 07:30:35 crc kubenswrapper[4764]: I0127 07:30:35.904461 4764 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" podStartSLOduration=1.7034313 podStartE2EDuration="6.904420945s" podCreationTimestamp="2026-01-27 07:30:29 +0000 UTC" firstStartedPulling="2026-01-27 07:30:30.083376877 +0000 UTC m=+842.678999403" lastFinishedPulling="2026-01-27 07:30:35.284366522 +0000 UTC m=+847.879989048" observedRunningTime="2026-01-27 07:30:35.89859445 +0000 UTC m=+848.494217066" watchObservedRunningTime="2026-01-27 07:30:35.904420945 +0000 UTC m=+848.500043471" Jan 27 07:30:49 crc kubenswrapper[4764]: I0127 07:30:49.806373 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bfcf7b875-t6krg" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.273224 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.275048 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.277924 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-r4ppv" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.281666 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.282924 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.284729 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-b7gnv" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.286342 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.287223 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.289231 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ttk6f" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.298541 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.306627 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.311133 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.347365 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.348191 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.351052 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2wg9x" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.362493 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.366595 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.367309 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.369080 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptjc\" (UniqueName: \"kubernetes.io/projected/7f9c84bb-2150-49ce-9002-2719d491b2d9-kube-api-access-9ptjc\") pod \"designate-operator-controller-manager-76d4d5b8f9-r76hr\" (UID: \"7f9c84bb-2150-49ce-9002-2719d491b2d9\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.369274 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gphp\" (UniqueName: \"kubernetes.io/projected/14776870-1ec8-423a-a486-ac576b83cb99-kube-api-access-7gphp\") pod \"cinder-operator-controller-manager-5fdc687f5-cxmrb\" (UID: \"14776870-1ec8-423a-a486-ac576b83cb99\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.369490 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqrr\" (UniqueName: \"kubernetes.io/projected/11dbcaab-8ae4-454f-bc9a-5082597154b2-kube-api-access-ssqrr\") pod \"barbican-operator-controller-manager-75b8f798ff-nl98d\" (UID: \"11dbcaab-8ae4-454f-bc9a-5082597154b2\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.369326 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8nxhf" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.378070 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.379142 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.384215 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hfwjk" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.392351 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.429978 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.432592 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.440237 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.444536 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kx4xr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.446649 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.447804 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.452512 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-pdtkj" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.453247 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.460140 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.461413 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.466805 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fllm9" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.475981 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwwm\" (UniqueName: \"kubernetes.io/projected/6986731a-1bd6-4bfe-a196-7d9be4e9e6f8-kube-api-access-4wwwm\") pod \"heat-operator-controller-manager-658dd65b86-hlkrc\" (UID: \"6986731a-1bd6-4bfe-a196-7d9be4e9e6f8\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.476040 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.476077 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9w9x\" (UniqueName: \"kubernetes.io/projected/1328327d-b57d-4072-86de-039c4642a1f8-kube-api-access-q9w9x\") pod \"glance-operator-controller-manager-84d5bb46b-bh2ct\" (UID: \"1328327d-b57d-4072-86de-039c4642a1f8\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.476106 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwnn2\" (UniqueName: \"kubernetes.io/projected/68e65a71-8e22-4256-81eb-cd9a58927e5a-kube-api-access-zwnn2\") pod 
\"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.476146 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssqrr\" (UniqueName: \"kubernetes.io/projected/11dbcaab-8ae4-454f-bc9a-5082597154b2-kube-api-access-ssqrr\") pod \"barbican-operator-controller-manager-75b8f798ff-nl98d\" (UID: \"11dbcaab-8ae4-454f-bc9a-5082597154b2\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.476181 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ptjc\" (UniqueName: \"kubernetes.io/projected/7f9c84bb-2150-49ce-9002-2719d491b2d9-kube-api-access-9ptjc\") pod \"designate-operator-controller-manager-76d4d5b8f9-r76hr\" (UID: \"7f9c84bb-2150-49ce-9002-2719d491b2d9\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.476216 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghqw\" (UniqueName: \"kubernetes.io/projected/65cc2cb9-2b52-4597-b5b5-0ca087d2f306-kube-api-access-zghqw\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nrlkg\" (UID: \"65cc2cb9-2b52-4597-b5b5-0ca087d2f306\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.476262 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gphp\" (UniqueName: \"kubernetes.io/projected/14776870-1ec8-423a-a486-ac576b83cb99-kube-api-access-7gphp\") pod \"cinder-operator-controller-manager-5fdc687f5-cxmrb\" (UID: \"14776870-1ec8-423a-a486-ac576b83cb99\") " 
pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.515963 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ptjc\" (UniqueName: \"kubernetes.io/projected/7f9c84bb-2150-49ce-9002-2719d491b2d9-kube-api-access-9ptjc\") pod \"designate-operator-controller-manager-76d4d5b8f9-r76hr\" (UID: \"7f9c84bb-2150-49ce-9002-2719d491b2d9\") " pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.517155 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.522490 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gphp\" (UniqueName: \"kubernetes.io/projected/14776870-1ec8-423a-a486-ac576b83cb99-kube-api-access-7gphp\") pod \"cinder-operator-controller-manager-5fdc687f5-cxmrb\" (UID: \"14776870-1ec8-423a-a486-ac576b83cb99\") " pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.525262 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.535188 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.535255 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.536155 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.540538 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.540554 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssqrr\" (UniqueName: \"kubernetes.io/projected/11dbcaab-8ae4-454f-bc9a-5082597154b2-kube-api-access-ssqrr\") pod \"barbican-operator-controller-manager-75b8f798ff-nl98d\" (UID: \"11dbcaab-8ae4-454f-bc9a-5082597154b2\") " pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.541034 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bf6hg" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.541670 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.549209 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-v4pw9" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.555418 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.565428 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.565509 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.566528 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.569209 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.570237 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.573953 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8gx5t" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.574211 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mxn8b" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.575283 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.576269 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.580882 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wwwm\" (UniqueName: \"kubernetes.io/projected/6986731a-1bd6-4bfe-a196-7d9be4e9e6f8-kube-api-access-4wwwm\") pod \"heat-operator-controller-manager-658dd65b86-hlkrc\" (UID: \"6986731a-1bd6-4bfe-a196-7d9be4e9e6f8\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.580934 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.580969 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9w9x\" (UniqueName: 
\"kubernetes.io/projected/1328327d-b57d-4072-86de-039c4642a1f8-kube-api-access-q9w9x\") pod \"glance-operator-controller-manager-84d5bb46b-bh2ct\" (UID: \"1328327d-b57d-4072-86de-039c4642a1f8\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.580996 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwnn2\" (UniqueName: \"kubernetes.io/projected/68e65a71-8e22-4256-81eb-cd9a58927e5a-kube-api-access-zwnn2\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.581031 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lj6\" (UniqueName: \"kubernetes.io/projected/e5c21e1e-f5ae-4f87-8789-2638c0b4dea1-kube-api-access-h6lj6\") pod \"ironic-operator-controller-manager-58865f87b4-wpkhc\" (UID: \"e5c21e1e-f5ae-4f87-8789-2638c0b4dea1\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.581069 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghqw\" (UniqueName: \"kubernetes.io/projected/65cc2cb9-2b52-4597-b5b5-0ca087d2f306-kube-api-access-zghqw\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nrlkg\" (UID: \"65cc2cb9-2b52-4597-b5b5-0ca087d2f306\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.581102 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpc6n\" (UniqueName: \"kubernetes.io/projected/2b6a69f3-cf3b-465a-917c-78cf3248eb58-kube-api-access-gpc6n\") pod 
\"keystone-operator-controller-manager-78f8b7b89c-ptrgx\" (UID: \"2b6a69f3-cf3b-465a-917c-78cf3248eb58\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" Jan 27 07:31:09 crc kubenswrapper[4764]: E0127 07:31:09.583146 4764 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:09 crc kubenswrapper[4764]: E0127 07:31:09.583236 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert podName:68e65a71-8e22-4256-81eb-cd9a58927e5a nodeName:}" failed. No retries permitted until 2026-01-27 07:31:10.083211293 +0000 UTC m=+882.678833879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert") pod "infra-operator-controller-manager-54ccf4f85d-27ncl" (UID: "68e65a71-8e22-4256-81eb-cd9a58927e5a") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.595195 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7xgmj" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.606270 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.615927 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.630709 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wwwm\" (UniqueName: \"kubernetes.io/projected/6986731a-1bd6-4bfe-a196-7d9be4e9e6f8-kube-api-access-4wwwm\") pod \"heat-operator-controller-manager-658dd65b86-hlkrc\" (UID: \"6986731a-1bd6-4bfe-a196-7d9be4e9e6f8\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.633055 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9w9x\" (UniqueName: \"kubernetes.io/projected/1328327d-b57d-4072-86de-039c4642a1f8-kube-api-access-q9w9x\") pod \"glance-operator-controller-manager-84d5bb46b-bh2ct\" (UID: \"1328327d-b57d-4072-86de-039c4642a1f8\") " pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.643576 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghqw\" (UniqueName: \"kubernetes.io/projected/65cc2cb9-2b52-4597-b5b5-0ca087d2f306-kube-api-access-zghqw\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nrlkg\" (UID: \"65cc2cb9-2b52-4597-b5b5-0ca087d2f306\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.643842 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.664192 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwnn2\" (UniqueName: \"kubernetes.io/projected/68e65a71-8e22-4256-81eb-cd9a58927e5a-kube-api-access-zwnn2\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.672858 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.683217 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjc5k\" (UniqueName: \"kubernetes.io/projected/be2b755e-7957-421a-be94-398366a49522-kube-api-access-jjc5k\") pod \"manila-operator-controller-manager-78b8f8fd84-8ffvc\" (UID: \"be2b755e-7957-421a-be94-398366a49522\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.683269 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lj6\" (UniqueName: \"kubernetes.io/projected/e5c21e1e-f5ae-4f87-8789-2638c0b4dea1-kube-api-access-h6lj6\") pod \"ironic-operator-controller-manager-58865f87b4-wpkhc\" (UID: \"e5c21e1e-f5ae-4f87-8789-2638c0b4dea1\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.683334 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpc6n\" (UniqueName: \"kubernetes.io/projected/2b6a69f3-cf3b-465a-917c-78cf3248eb58-kube-api-access-gpc6n\") pod 
\"keystone-operator-controller-manager-78f8b7b89c-ptrgx\" (UID: \"2b6a69f3-cf3b-465a-917c-78cf3248eb58\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.683363 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkf7\" (UniqueName: \"kubernetes.io/projected/93d08886-452d-4408-a924-c9e572c8b2f0-kube-api-access-bgkf7\") pod \"nova-operator-controller-manager-74ffd97575-rhpc8\" (UID: \"93d08886-452d-4408-a924-c9e572c8b2f0\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.683387 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jvb\" (UniqueName: \"kubernetes.io/projected/85693d04-9de6-4da3-a527-c6d84ff033b2-kube-api-access-f6jvb\") pod \"neutron-operator-controller-manager-569695f6c5-87prv\" (UID: \"85693d04-9de6-4da3-a527-c6d84ff033b2\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.683423 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htg8r\" (UniqueName: \"kubernetes.io/projected/89a9b88a-dcc3-462f-a5f2-1311113a92ca-kube-api-access-htg8r\") pod \"octavia-operator-controller-manager-7bf4858b78-xr5lf\" (UID: \"89a9b88a-dcc3-462f-a5f2-1311113a92ca\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.683457 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6cf\" (UniqueName: \"kubernetes.io/projected/6ae3b798-c9a5-48a9-8608-af33f26cb323-kube-api-access-bf6cf\") pod \"mariadb-operator-controller-manager-7b88bfc995-65dhc\" (UID: 
\"6ae3b798-c9a5-48a9-8608-af33f26cb323\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.694392 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.698673 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.708150 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.713515 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6lj6\" (UniqueName: \"kubernetes.io/projected/e5c21e1e-f5ae-4f87-8789-2638c0b4dea1-kube-api-access-h6lj6\") pod \"ironic-operator-controller-manager-58865f87b4-wpkhc\" (UID: \"e5c21e1e-f5ae-4f87-8789-2638c0b4dea1\") " pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.717179 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpc6n\" (UniqueName: \"kubernetes.io/projected/2b6a69f3-cf3b-465a-917c-78cf3248eb58-kube-api-access-gpc6n\") pod \"keystone-operator-controller-manager-78f8b7b89c-ptrgx\" (UID: \"2b6a69f3-cf3b-465a-917c-78cf3248eb58\") " pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.731137 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.734055 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.734891 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.737679 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.738052 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rxz98" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.749570 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.765078 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.773873 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.775114 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.778404 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vd568" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.783888 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.784754 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.784887 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htg8r\" (UniqueName: \"kubernetes.io/projected/89a9b88a-dcc3-462f-a5f2-1311113a92ca-kube-api-access-htg8r\") pod \"octavia-operator-controller-manager-7bf4858b78-xr5lf\" (UID: \"89a9b88a-dcc3-462f-a5f2-1311113a92ca\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.784940 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6cf\" (UniqueName: \"kubernetes.io/projected/6ae3b798-c9a5-48a9-8608-af33f26cb323-kube-api-access-bf6cf\") pod \"mariadb-operator-controller-manager-7b88bfc995-65dhc\" (UID: \"6ae3b798-c9a5-48a9-8608-af33f26cb323\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.784991 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjc5k\" (UniqueName: \"kubernetes.io/projected/be2b755e-7957-421a-be94-398366a49522-kube-api-access-jjc5k\") pod \"manila-operator-controller-manager-78b8f8fd84-8ffvc\" (UID: 
\"be2b755e-7957-421a-be94-398366a49522\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.785026 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.785081 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgg8x\" (UniqueName: \"kubernetes.io/projected/2b3e29bf-af7b-4575-a91b-042b85a244c9-kube-api-access-cgg8x\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.785134 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgkf7\" (UniqueName: \"kubernetes.io/projected/93d08886-452d-4408-a924-c9e572c8b2f0-kube-api-access-bgkf7\") pod \"nova-operator-controller-manager-74ffd97575-rhpc8\" (UID: \"93d08886-452d-4408-a924-c9e572c8b2f0\") " pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.785169 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jvb\" (UniqueName: \"kubernetes.io/projected/85693d04-9de6-4da3-a527-c6d84ff033b2-kube-api-access-f6jvb\") pod \"neutron-operator-controller-manager-569695f6c5-87prv\" (UID: \"85693d04-9de6-4da3-a527-c6d84ff033b2\") " 
pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.795598 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wl4hp" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.808652 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjc5k\" (UniqueName: \"kubernetes.io/projected/be2b755e-7957-421a-be94-398366a49522-kube-api-access-jjc5k\") pod \"manila-operator-controller-manager-78b8f8fd84-8ffvc\" (UID: \"be2b755e-7957-421a-be94-398366a49522\") " pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.808784 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.810087 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.818147 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htg8r\" (UniqueName: \"kubernetes.io/projected/89a9b88a-dcc3-462f-a5f2-1311113a92ca-kube-api-access-htg8r\") pod \"octavia-operator-controller-manager-7bf4858b78-xr5lf\" (UID: \"89a9b88a-dcc3-462f-a5f2-1311113a92ca\") " pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.823160 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgkf7\" (UniqueName: \"kubernetes.io/projected/93d08886-452d-4408-a924-c9e572c8b2f0-kube-api-access-bgkf7\") pod \"nova-operator-controller-manager-74ffd97575-rhpc8\" (UID: \"93d08886-452d-4408-a924-c9e572c8b2f0\") " 
pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.825000 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jvb\" (UniqueName: \"kubernetes.io/projected/85693d04-9de6-4da3-a527-c6d84ff033b2-kube-api-access-f6jvb\") pod \"neutron-operator-controller-manager-569695f6c5-87prv\" (UID: \"85693d04-9de6-4da3-a527-c6d84ff033b2\") " pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.826213 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6cf\" (UniqueName: \"kubernetes.io/projected/6ae3b798-c9a5-48a9-8608-af33f26cb323-kube-api-access-bf6cf\") pod \"mariadb-operator-controller-manager-7b88bfc995-65dhc\" (UID: \"6ae3b798-c9a5-48a9-8608-af33f26cb323\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.837965 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.839587 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.843974 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6sszz" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.845022 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.847101 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.856872 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.857949 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.867093 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x426p" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.880708 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.881481 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.886394 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5bk\" (UniqueName: \"kubernetes.io/projected/c8b21737-d0e6-447a-b230-50e2aed06fd1-kube-api-access-wb5bk\") pod \"telemetry-operator-controller-manager-7db57dc8bf-r9k82\" (UID: \"c8b21737-d0e6-447a-b230-50e2aed06fd1\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.886474 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.886533 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgg8x\" (UniqueName: \"kubernetes.io/projected/2b3e29bf-af7b-4575-a91b-042b85a244c9-kube-api-access-cgg8x\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.886574 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99tnt\" (UniqueName: 
\"kubernetes.io/projected/f72762ca-57ca-4151-aa29-f2a7db4be1f0-kube-api-access-99tnt\") pod \"ovn-operator-controller-manager-bf6d4f946-x6k2g\" (UID: \"f72762ca-57ca-4151-aa29-f2a7db4be1f0\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.886657 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzzk\" (UniqueName: \"kubernetes.io/projected/683bcc0e-1607-496b-8d4b-195a1eb2bbaa-kube-api-access-mwzzk\") pod \"placement-operator-controller-manager-7748d79f84-xvlnc\" (UID: \"683bcc0e-1607-496b-8d4b-195a1eb2bbaa\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.886701 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9klpk\" (UniqueName: \"kubernetes.io/projected/6f80a85c-409f-4e68-ab1d-8ee9bb19e544-kube-api-access-9klpk\") pod \"swift-operator-controller-manager-65596dbf77-h5nkr\" (UID: \"6f80a85c-409f-4e68-ab1d-8ee9bb19e544\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" Jan 27 07:31:09 crc kubenswrapper[4764]: E0127 07:31:09.886879 4764 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:09 crc kubenswrapper[4764]: E0127 07:31:09.886933 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert podName:2b3e29bf-af7b-4575-a91b-042b85a244c9 nodeName:}" failed. No retries permitted until 2026-01-27 07:31:10.386912596 +0000 UTC m=+882.982535122 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" (UID: "2b3e29bf-af7b-4575-a91b-042b85a244c9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.913327 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.919231 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgg8x\" (UniqueName: \"kubernetes.io/projected/2b3e29bf-af7b-4575-a91b-042b85a244c9-kube-api-access-cgg8x\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.926522 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.927095 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.930262 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.934782 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-f8257" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.987685 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48"] Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.988570 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwzzk\" (UniqueName: \"kubernetes.io/projected/683bcc0e-1607-496b-8d4b-195a1eb2bbaa-kube-api-access-mwzzk\") pod \"placement-operator-controller-manager-7748d79f84-xvlnc\" (UID: \"683bcc0e-1607-496b-8d4b-195a1eb2bbaa\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.988599 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9klpk\" (UniqueName: \"kubernetes.io/projected/6f80a85c-409f-4e68-ab1d-8ee9bb19e544-kube-api-access-9klpk\") pod \"swift-operator-controller-manager-65596dbf77-h5nkr\" (UID: \"6f80a85c-409f-4e68-ab1d-8ee9bb19e544\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.988640 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5bk\" (UniqueName: \"kubernetes.io/projected/c8b21737-d0e6-447a-b230-50e2aed06fd1-kube-api-access-wb5bk\") pod \"telemetry-operator-controller-manager-7db57dc8bf-r9k82\" (UID: \"c8b21737-d0e6-447a-b230-50e2aed06fd1\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.988660 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnts\" (UniqueName: \"kubernetes.io/projected/87551348-318f-45f9-bea6-07750f5c0b7b-kube-api-access-2gnts\") pod \"test-operator-controller-manager-6c866cfdcb-lcd48\" (UID: \"87551348-318f-45f9-bea6-07750f5c0b7b\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" Jan 27 07:31:09 crc kubenswrapper[4764]: I0127 07:31:09.988719 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99tnt\" (UniqueName: \"kubernetes.io/projected/f72762ca-57ca-4151-aa29-f2a7db4be1f0-kube-api-access-99tnt\") pod \"ovn-operator-controller-manager-bf6d4f946-x6k2g\" (UID: \"f72762ca-57ca-4151-aa29-f2a7db4be1f0\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.029676 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.032245 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.034201 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.041068 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99tnt\" (UniqueName: \"kubernetes.io/projected/f72762ca-57ca-4151-aa29-f2a7db4be1f0-kube-api-access-99tnt\") pod \"ovn-operator-controller-manager-bf6d4f946-x6k2g\" (UID: \"f72762ca-57ca-4151-aa29-f2a7db4be1f0\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.042543 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.042723 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hwssm" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.044694 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5bk\" (UniqueName: \"kubernetes.io/projected/c8b21737-d0e6-447a-b230-50e2aed06fd1-kube-api-access-wb5bk\") pod \"telemetry-operator-controller-manager-7db57dc8bf-r9k82\" (UID: \"c8b21737-d0e6-447a-b230-50e2aed06fd1\") " pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.045316 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwzzk\" (UniqueName: \"kubernetes.io/projected/683bcc0e-1607-496b-8d4b-195a1eb2bbaa-kube-api-access-mwzzk\") pod \"placement-operator-controller-manager-7748d79f84-xvlnc\" (UID: \"683bcc0e-1607-496b-8d4b-195a1eb2bbaa\") " pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.049621 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.055065 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9klpk\" (UniqueName: \"kubernetes.io/projected/6f80a85c-409f-4e68-ab1d-8ee9bb19e544-kube-api-access-9klpk\") pod \"swift-operator-controller-manager-65596dbf77-h5nkr\" (UID: \"6f80a85c-409f-4e68-ab1d-8ee9bb19e544\") " pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.061218 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.089528 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.089591 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnts\" (UniqueName: \"kubernetes.io/projected/87551348-318f-45f9-bea6-07750f5c0b7b-kube-api-access-2gnts\") pod \"test-operator-controller-manager-6c866cfdcb-lcd48\" (UID: \"87551348-318f-45f9-bea6-07750f5c0b7b\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.089631 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq74g\" (UniqueName: \"kubernetes.io/projected/01275567-9bc4-4728-98af-c399b3b386f3-kube-api-access-fq74g\") pod \"watcher-operator-controller-manager-6476466c7c-2hntl\" (UID: 
\"01275567-9bc4-4728-98af-c399b3b386f3\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.089792 4764 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.089830 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert podName:68e65a71-8e22-4256-81eb-cd9a58927e5a nodeName:}" failed. No retries permitted until 2026-01-27 07:31:11.08981636 +0000 UTC m=+883.685438886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert") pod "infra-operator-controller-manager-54ccf4f85d-27ncl" (UID: "68e65a71-8e22-4256-81eb-cd9a58927e5a") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.091201 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.092314 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.098722 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-kdw8c" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.098925 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.101866 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.117738 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnts\" (UniqueName: \"kubernetes.io/projected/87551348-318f-45f9-bea6-07750f5c0b7b-kube-api-access-2gnts\") pod \"test-operator-controller-manager-6c866cfdcb-lcd48\" (UID: \"87551348-318f-45f9-bea6-07750f5c0b7b\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.120029 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.133654 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.137821 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.139064 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.166585 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.172311 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9cn54" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.172958 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.191586 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.191690 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.191767 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764tv\" (UniqueName: \"kubernetes.io/projected/45b5dc0d-656e-4475-9414-ac8f1e6ae767-kube-api-access-764tv\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-6b8j8\" (UID: \"45b5dc0d-656e-4475-9414-ac8f1e6ae767\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.191815 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq74g\" (UniqueName: \"kubernetes.io/projected/01275567-9bc4-4728-98af-c399b3b386f3-kube-api-access-fq74g\") pod \"watcher-operator-controller-manager-6476466c7c-2hntl\" (UID: \"01275567-9bc4-4728-98af-c399b3b386f3\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.191855 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfg2t\" (UniqueName: \"kubernetes.io/projected/ce339e1f-d181-42e1-bb9a-d6401699560f-kube-api-access-dfg2t\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.196023 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.221109 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq74g\" (UniqueName: \"kubernetes.io/projected/01275567-9bc4-4728-98af-c399b3b386f3-kube-api-access-fq74g\") pod \"watcher-operator-controller-manager-6476466c7c-2hntl\" (UID: \"01275567-9bc4-4728-98af-c399b3b386f3\") " pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.223669 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.300836 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.300947 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764tv\" (UniqueName: \"kubernetes.io/projected/45b5dc0d-656e-4475-9414-ac8f1e6ae767-kube-api-access-764tv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6b8j8\" (UID: \"45b5dc0d-656e-4475-9414-ac8f1e6ae767\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.301019 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfg2t\" (UniqueName: \"kubernetes.io/projected/ce339e1f-d181-42e1-bb9a-d6401699560f-kube-api-access-dfg2t\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.301055 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.301247 4764 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.301351 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:10.801322692 +0000 UTC m=+883.396945218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.301269 4764 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.301402 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:10.801393944 +0000 UTC m=+883.397016470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "metrics-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.330838 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfg2t\" (UniqueName: \"kubernetes.io/projected/ce339e1f-d181-42e1-bb9a-d6401699560f-kube-api-access-dfg2t\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.334909 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764tv\" (UniqueName: \"kubernetes.io/projected/45b5dc0d-656e-4475-9414-ac8f1e6ae767-kube-api-access-764tv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6b8j8\" (UID: \"45b5dc0d-656e-4475-9414-ac8f1e6ae767\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.360013 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.362743 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.367269 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.376037 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.403022 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.403312 4764 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.403386 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert podName:2b3e29bf-af7b-4575-a91b-042b85a244c9 nodeName:}" failed. No retries permitted until 2026-01-27 07:31:11.403365854 +0000 UTC m=+883.998988380 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" (UID: "2b3e29bf-af7b-4575-a91b-042b85a244c9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: W0127 07:31:10.423635 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f9c84bb_2150_49ce_9002_2719d491b2d9.slice/crio-0f3e97c314cbe73658527c0933d66ea89366e6608ce6d00b7f4858260bc027f8 WatchSource:0}: Error finding container 0f3e97c314cbe73658527c0933d66ea89366e6608ce6d00b7f4858260bc027f8: Status 404 returned error can't find the container with id 0f3e97c314cbe73658527c0933d66ea89366e6608ce6d00b7f4858260bc027f8 Jan 27 07:31:10 crc kubenswrapper[4764]: W0127 07:31:10.488508 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11dbcaab_8ae4_454f_bc9a_5082597154b2.slice/crio-372645e68eb486d73c52b4e5991801a62359b661ce4ad35777c60a2a55aa54d9 WatchSource:0}: Error finding container 372645e68eb486d73c52b4e5991801a62359b661ce4ad35777c60a2a55aa54d9: Status 404 returned error can't find the container with id 372645e68eb486d73c52b4e5991801a62359b661ce4ad35777c60a2a55aa54d9 Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.533502 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.758671 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.772559 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.823565 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc"] Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.837821 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: I0127 07:31:10.837945 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.838230 4764 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.838322 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs 
podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:11.838293106 +0000 UTC m=+884.433915632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "webhook-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.838399 4764 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:31:10 crc kubenswrapper[4764]: E0127 07:31:10.838430 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:11.838421259 +0000 UTC m=+884.434043785 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "metrics-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.143645 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.143779 4764 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.143826 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert podName:68e65a71-8e22-4256-81eb-cd9a58927e5a nodeName:}" failed. No retries permitted until 2026-01-27 07:31:13.143810907 +0000 UTC m=+885.739433433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert") pod "infra-operator-controller-manager-54ccf4f85d-27ncl" (UID: "68e65a71-8e22-4256-81eb-cd9a58927e5a") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.191544 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" event={"ID":"6986731a-1bd6-4bfe-a196-7d9be4e9e6f8","Type":"ContainerStarted","Data":"d48a7016b2b225b8972fdb4824d602e8bb55afe3af7a625cfe61e63f5439b5e9"} Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.199482 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" event={"ID":"1328327d-b57d-4072-86de-039c4642a1f8","Type":"ContainerStarted","Data":"a1ddcebf51315bf69d1950b0b64fd859bc9e4bbabf636e669f4fb01dabd1e4bd"} Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.201369 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" event={"ID":"11dbcaab-8ae4-454f-bc9a-5082597154b2","Type":"ContainerStarted","Data":"372645e68eb486d73c52b4e5991801a62359b661ce4ad35777c60a2a55aa54d9"} Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.202583 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" event={"ID":"7f9c84bb-2150-49ce-9002-2719d491b2d9","Type":"ContainerStarted","Data":"0f3e97c314cbe73658527c0933d66ea89366e6608ce6d00b7f4858260bc027f8"} Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.210176 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" 
event={"ID":"14776870-1ec8-423a-a486-ac576b83cb99","Type":"ContainerStarted","Data":"f0c4c834b7ef05b67fa948070ab317b7ea6c7178f046999dbc92744161c026ed"} Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.442193 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.450541 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.450714 4764 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.450771 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert podName:2b3e29bf-af7b-4575-a91b-042b85a244c9 nodeName:}" failed. No retries permitted until 2026-01-27 07:31:13.450755487 +0000 UTC m=+886.046378013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" (UID: "2b3e29bf-af7b-4575-a91b-042b85a244c9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: W0127 07:31:11.456712 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae3b798_c9a5_48a9_8608_af33f26cb323.slice/crio-e29e1b120d1a849d78051277a8727d7dd8e4d3a8e2cc7a2c0611dc19d065057f WatchSource:0}: Error finding container e29e1b120d1a849d78051277a8727d7dd8e4d3a8e2cc7a2c0611dc19d065057f: Status 404 returned error can't find the container with id e29e1b120d1a849d78051277a8727d7dd8e4d3a8e2cc7a2c0611dc19d065057f Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.492407 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.499556 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg"] Jan 27 07:31:11 crc kubenswrapper[4764]: W0127 07:31:11.528829 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b6a69f3_cf3b_465a_917c_78cf3248eb58.slice/crio-d65dadfeb87a174218acfc293244f5224f75534a4cce34061cc65430a8938601 WatchSource:0}: Error finding container d65dadfeb87a174218acfc293244f5224f75534a4cce34061cc65430a8938601: Status 404 returned error can't find the container with id d65dadfeb87a174218acfc293244f5224f75534a4cce34061cc65430a8938601 Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.539491 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc"] Jan 27 
07:31:11 crc kubenswrapper[4764]: W0127 07:31:11.544768 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5c21e1e_f5ae_4f87_8789_2638c0b4dea1.slice/crio-f30139eda531bf51b2facc6d9b3a9d5c65e0b8d1762bae616d4c212ce67da327 WatchSource:0}: Error finding container f30139eda531bf51b2facc6d9b3a9d5c65e0b8d1762bae616d4c212ce67da327: Status 404 returned error can't find the container with id f30139eda531bf51b2facc6d9b3a9d5c65e0b8d1762bae616d4c212ce67da327 Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.820689 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.826837 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.839193 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.855052 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.858636 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.858767 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.858866 4764 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.858889 4764 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.858940 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:13.858924537 +0000 UTC m=+886.454547063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "webhook-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.858957 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:13.858951458 +0000 UTC m=+886.454573984 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "metrics-server-cert" not found Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.862818 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.872233 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.884560 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g"] Jan 27 07:31:11 crc kubenswrapper[4764]: W0127 07:31:11.887555 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45b5dc0d_656e_4475_9414_ac8f1e6ae767.slice/crio-48f047d322fdf650f59b6372708a64b96e3e941db3b187ad186f3c8205b6ecf5 WatchSource:0}: Error finding container 48f047d322fdf650f59b6372708a64b96e3e941db3b187ad186f3c8205b6ecf5: Status 404 returned error can't find the container with id 48f047d322fdf650f59b6372708a64b96e3e941db3b187ad186f3c8205b6ecf5 Jan 27 07:31:11 crc kubenswrapper[4764]: W0127 07:31:11.887922 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87551348_318f_45f9_bea6_07750f5c0b7b.slice/crio-dab37007c698c48946c68c8cb54a66c858db72812471141c71d4159bddd23d7c WatchSource:0}: Error finding container dab37007c698c48946c68c8cb54a66c858db72812471141c71d4159bddd23d7c: Status 404 returned error can't find the container with id dab37007c698c48946c68c8cb54a66c858db72812471141c71d4159bddd23d7c Jan 27 07:31:11 crc kubenswrapper[4764]: 
I0127 07:31:11.890590 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.895558 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.901245 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc"] Jan 27 07:31:11 crc kubenswrapper[4764]: I0127 07:31:11.907732 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl"] Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.921188 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gnts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-lcd48_openstack-operators(87551348-318f-45f9-bea6-07750f5c0b7b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.922346 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" podUID="87551348-318f-45f9-bea6-07750f5c0b7b" Jan 27 07:31:11 crc 
kubenswrapper[4764]: E0127 07:31:11.930359 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/octavia-operator@sha256:c71c081c53239338b69dc68bde59707ecafa147c81489fd755b82a9f1af402bd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-htg8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bf4858b78-xr5lf_openstack-operators(89a9b88a-dcc3-462f-a5f2-1311113a92ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.931591 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" podUID="89a9b88a-dcc3-462f-a5f2-1311113a92ca" Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.932000 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/placement-operator@sha256:a40693d0a2ee7b50ff5b2bd339bc0ce358ccc16309e803e40d8b26e189a2b4c0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mwzzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-7748d79f84-xvlnc_openstack-operators(683bcc0e-1607-496b-8d4b-195a1eb2bbaa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:31:11 crc kubenswrapper[4764]: E0127 07:31:11.934034 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" podUID="683bcc0e-1607-496b-8d4b-195a1eb2bbaa" Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.257390 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" event={"ID":"c8b21737-d0e6-447a-b230-50e2aed06fd1","Type":"ContainerStarted","Data":"0168cf7068d8fc206a5e5630b7e750a71832a90ce668ae0cc7e9572f50b2f5f6"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.259591 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" 
event={"ID":"85693d04-9de6-4da3-a527-c6d84ff033b2","Type":"ContainerStarted","Data":"bcd22aba625e831cdb6e39f9273c7b71d0b8585130d8c2767c6076da3cc97862"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.261697 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" event={"ID":"89a9b88a-dcc3-462f-a5f2-1311113a92ca","Type":"ContainerStarted","Data":"465074e756b0aab12341da5b15bb3d0af61e95fbd64a8b9aa24de9900dd8f899"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.266936 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" event={"ID":"683bcc0e-1607-496b-8d4b-195a1eb2bbaa","Type":"ContainerStarted","Data":"5fb40bc5f2971ea9fb4a835997b86109c9260e065886ffe4ee968eee68cd547d"} Jan 27 07:31:12 crc kubenswrapper[4764]: E0127 07:31:12.266997 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/octavia-operator@sha256:c71c081c53239338b69dc68bde59707ecafa147c81489fd755b82a9f1af402bd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" podUID="89a9b88a-dcc3-462f-a5f2-1311113a92ca" Jan 27 07:31:12 crc kubenswrapper[4764]: E0127 07:31:12.269077 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/placement-operator@sha256:a40693d0a2ee7b50ff5b2bd339bc0ce358ccc16309e803e40d8b26e189a2b4c0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" podUID="683bcc0e-1607-496b-8d4b-195a1eb2bbaa" Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.270283 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" 
event={"ID":"be2b755e-7957-421a-be94-398366a49522","Type":"ContainerStarted","Data":"085a14db1b1abb258c7a81bb59c15ac5a0804779a81da348ebc80b1851caebf6"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.273394 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" event={"ID":"2b6a69f3-cf3b-465a-917c-78cf3248eb58","Type":"ContainerStarted","Data":"d65dadfeb87a174218acfc293244f5224f75534a4cce34061cc65430a8938601"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.277130 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" event={"ID":"f72762ca-57ca-4151-aa29-f2a7db4be1f0","Type":"ContainerStarted","Data":"d5b63063e37d014424887a755dd2c778c7ed8896b7ae99fdf3a852088f648f79"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.278869 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" event={"ID":"87551348-318f-45f9-bea6-07750f5c0b7b","Type":"ContainerStarted","Data":"dab37007c698c48946c68c8cb54a66c858db72812471141c71d4159bddd23d7c"} Jan 27 07:31:12 crc kubenswrapper[4764]: E0127 07:31:12.280113 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" podUID="87551348-318f-45f9-bea6-07750f5c0b7b" Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.295329 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" 
event={"ID":"45b5dc0d-656e-4475-9414-ac8f1e6ae767","Type":"ContainerStarted","Data":"48f047d322fdf650f59b6372708a64b96e3e941db3b187ad186f3c8205b6ecf5"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.299477 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" event={"ID":"e5c21e1e-f5ae-4f87-8789-2638c0b4dea1","Type":"ContainerStarted","Data":"f30139eda531bf51b2facc6d9b3a9d5c65e0b8d1762bae616d4c212ce67da327"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.303511 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" event={"ID":"6f80a85c-409f-4e68-ab1d-8ee9bb19e544","Type":"ContainerStarted","Data":"d3a67a25479c6c90f1e45949d2ef87c5942e9a3e434df3335a085ecde651a4f9"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.305981 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" event={"ID":"93d08886-452d-4408-a924-c9e572c8b2f0","Type":"ContainerStarted","Data":"0904b1aa5922ff99b3ce516948e1bc63dbf5a94db59d0e0536ac3ded3696d4c6"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.308893 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" event={"ID":"01275567-9bc4-4728-98af-c399b3b386f3","Type":"ContainerStarted","Data":"05b21c15f2bfd6c01a0d1ace69f4659ca5108edebefdc7f227b0bd976876e156"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.309904 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" event={"ID":"6ae3b798-c9a5-48a9-8608-af33f26cb323","Type":"ContainerStarted","Data":"e29e1b120d1a849d78051277a8727d7dd8e4d3a8e2cc7a2c0611dc19d065057f"} Jan 27 07:31:12 crc kubenswrapper[4764]: I0127 07:31:12.310977 4764 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" event={"ID":"65cc2cb9-2b52-4597-b5b5-0ca087d2f306","Type":"ContainerStarted","Data":"89d8f10b0a30d0f0f6761268df8ae32ebba2c43e97c454890628e689bf74d181"} Jan 27 07:31:13 crc kubenswrapper[4764]: I0127 07:31:13.203370 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.203540 4764 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.203594 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert podName:68e65a71-8e22-4256-81eb-cd9a58927e5a nodeName:}" failed. No retries permitted until 2026-01-27 07:31:17.203578502 +0000 UTC m=+889.799201028 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert") pod "infra-operator-controller-manager-54ccf4f85d-27ncl" (UID: "68e65a71-8e22-4256-81eb-cd9a58927e5a") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.322040 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" podUID="87551348-318f-45f9-bea6-07750f5c0b7b" Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.322139 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/placement-operator@sha256:a40693d0a2ee7b50ff5b2bd339bc0ce358ccc16309e803e40d8b26e189a2b4c0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" podUID="683bcc0e-1607-496b-8d4b-195a1eb2bbaa" Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.322291 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/octavia-operator@sha256:c71c081c53239338b69dc68bde59707ecafa147c81489fd755b82a9f1af402bd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" podUID="89a9b88a-dcc3-462f-a5f2-1311113a92ca" Jan 27 07:31:13 crc kubenswrapper[4764]: I0127 07:31:13.506344 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" 
(UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.507013 4764 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.507058 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert podName:2b3e29bf-af7b-4575-a91b-042b85a244c9 nodeName:}" failed. No retries permitted until 2026-01-27 07:31:17.507043459 +0000 UTC m=+890.102665975 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" (UID: "2b3e29bf-af7b-4575-a91b-042b85a244c9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:13 crc kubenswrapper[4764]: I0127 07:31:13.917283 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:13 crc kubenswrapper[4764]: I0127 07:31:13.917684 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 
07:31:13.917574 4764 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.917847 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:17.917816897 +0000 UTC m=+890.513439423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "metrics-server-cert" not found Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.917877 4764 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:31:13 crc kubenswrapper[4764]: E0127 07:31:13.917940 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:17.91792013 +0000 UTC m=+890.513542656 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "webhook-server-cert" not found Jan 27 07:31:17 crc kubenswrapper[4764]: I0127 07:31:17.300822 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:17 crc kubenswrapper[4764]: E0127 07:31:17.301248 4764 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:17 crc kubenswrapper[4764]: E0127 07:31:17.301295 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert podName:68e65a71-8e22-4256-81eb-cd9a58927e5a nodeName:}" failed. No retries permitted until 2026-01-27 07:31:25.301281939 +0000 UTC m=+897.896904465 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert") pod "infra-operator-controller-manager-54ccf4f85d-27ncl" (UID: "68e65a71-8e22-4256-81eb-cd9a58927e5a") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:17 crc kubenswrapper[4764]: I0127 07:31:17.604618 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:17 crc kubenswrapper[4764]: E0127 07:31:17.604840 4764 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:17 crc kubenswrapper[4764]: E0127 07:31:17.604924 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert podName:2b3e29bf-af7b-4575-a91b-042b85a244c9 nodeName:}" failed. No retries permitted until 2026-01-27 07:31:25.604905819 +0000 UTC m=+898.200528345 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" (UID: "2b3e29bf-af7b-4575-a91b-042b85a244c9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:18 crc kubenswrapper[4764]: I0127 07:31:18.010757 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:18 crc kubenswrapper[4764]: I0127 07:31:18.010832 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:18 crc kubenswrapper[4764]: E0127 07:31:18.011011 4764 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:31:18 crc kubenswrapper[4764]: E0127 07:31:18.011065 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:26.011048236 +0000 UTC m=+898.606670762 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "webhook-server-cert" not found Jan 27 07:31:18 crc kubenswrapper[4764]: E0127 07:31:18.011379 4764 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:31:18 crc kubenswrapper[4764]: E0127 07:31:18.011482 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:26.011460957 +0000 UTC m=+898.607083483 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "metrics-server-cert" not found Jan 27 07:31:25 crc kubenswrapper[4764]: I0127 07:31:25.341324 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:25 crc kubenswrapper[4764]: E0127 07:31:25.341507 4764 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:25 crc kubenswrapper[4764]: E0127 07:31:25.342109 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert 
podName:68e65a71-8e22-4256-81eb-cd9a58927e5a nodeName:}" failed. No retries permitted until 2026-01-27 07:31:41.342089553 +0000 UTC m=+913.937712079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert") pod "infra-operator-controller-manager-54ccf4f85d-27ncl" (UID: "68e65a71-8e22-4256-81eb-cd9a58927e5a") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:31:25 crc kubenswrapper[4764]: I0127 07:31:25.646853 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:25 crc kubenswrapper[4764]: E0127 07:31:25.647061 4764 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:25 crc kubenswrapper[4764]: E0127 07:31:25.647154 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert podName:2b3e29bf-af7b-4575-a91b-042b85a244c9 nodeName:}" failed. No retries permitted until 2026-01-27 07:31:41.647130532 +0000 UTC m=+914.242753058 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert") pod "openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" (UID: "2b3e29bf-af7b-4575-a91b-042b85a244c9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:31:26 crc kubenswrapper[4764]: I0127 07:31:26.053883 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:26 crc kubenswrapper[4764]: I0127 07:31:26.053970 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:26 crc kubenswrapper[4764]: E0127 07:31:26.054061 4764 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:31:26 crc kubenswrapper[4764]: E0127 07:31:26.054141 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:42.054120781 +0000 UTC m=+914.649743307 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "metrics-server-cert" not found Jan 27 07:31:26 crc kubenswrapper[4764]: E0127 07:31:26.054215 4764 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:31:26 crc kubenswrapper[4764]: E0127 07:31:26.054301 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs podName:ce339e1f-d181-42e1-bb9a-d6401699560f nodeName:}" failed. No retries permitted until 2026-01-27 07:31:42.054277175 +0000 UTC m=+914.649899701 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs") pod "openstack-operator-controller-manager-76958f4d87-pt4l2" (UID: "ce339e1f-d181-42e1-bb9a-d6401699560f") : secret "webhook-server-cert" not found Jan 27 07:31:29 crc kubenswrapper[4764]: E0127 07:31:29.707629 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Jan 27 07:31:29 crc kubenswrapper[4764]: E0127 07:31:29.708305 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99tnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-x6k2g_openstack-operators(f72762ca-57ca-4151-aa29-f2a7db4be1f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:29 crc kubenswrapper[4764]: E0127 07:31:29.709533 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" podUID="f72762ca-57ca-4151-aa29-f2a7db4be1f0" Jan 27 07:31:30 crc kubenswrapper[4764]: E0127 07:31:30.493387 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" podUID="f72762ca-57ca-4151-aa29-f2a7db4be1f0" Jan 27 07:31:30 crc kubenswrapper[4764]: E0127 07:31:30.682127 4764 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/barbican-operator@sha256:629a757905fe676f15ebab2186532d8af43fb17ff289dad5df34fddfd54a4731" Jan 27 07:31:30 crc kubenswrapper[4764]: E0127 07:31:30.682336 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/barbican-operator@sha256:629a757905fe676f15ebab2186532d8af43fb17ff289dad5df34fddfd54a4731,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ssqrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-75b8f798ff-nl98d_openstack-operators(11dbcaab-8ae4-454f-bc9a-5082597154b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:30 crc kubenswrapper[4764]: E0127 07:31:30.683773 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" podUID="11dbcaab-8ae4-454f-bc9a-5082597154b2" Jan 27 07:31:31 crc kubenswrapper[4764]: E0127 07:31:31.497703 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/barbican-operator@sha256:629a757905fe676f15ebab2186532d8af43fb17ff289dad5df34fddfd54a4731\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" podUID="11dbcaab-8ae4-454f-bc9a-5082597154b2" Jan 27 07:31:32 crc kubenswrapper[4764]: E0127 07:31:32.630760 4764 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29" Jan 27 07:31:32 crc kubenswrapper[4764]: E0127 07:31:32.630965 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9klpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-65596dbf77-h5nkr_openstack-operators(6f80a85c-409f-4e68-ab1d-8ee9bb19e544): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:32 crc kubenswrapper[4764]: E0127 07:31:32.632196 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" podUID="6f80a85c-409f-4e68-ab1d-8ee9bb19e544" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.251762 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/ironic-operator@sha256:d7e1674896885701c5fd0a234d8fccb00d90066e46de4901642413f4b221c7ae" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.251963 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/rh-ee-vfisarov/ironic-operator@sha256:d7e1674896885701c5fd0a234d8fccb00d90066e46de4901642413f4b221c7ae,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6lj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-58865f87b4-wpkhc_openstack-operators(e5c21e1e-f5ae-4f87-8789-2638c0b4dea1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.254168 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" podUID="e5c21e1e-f5ae-4f87-8789-2638c0b4dea1" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.508117 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/ironic-operator@sha256:d7e1674896885701c5fd0a234d8fccb00d90066e46de4901642413f4b221c7ae\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" podUID="e5c21e1e-f5ae-4f87-8789-2638c0b4dea1" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.508339 4764 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/swift-operator@sha256:018ae1352a061ad22a0d4ac5764eb7e19cf5a1d6c2e554f61ae0bd82ebe62e29\\\"\"" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" podUID="6f80a85c-409f-4e68-ab1d-8ee9bb19e544" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.818148 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.818343 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fq74g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6476466c7c-2hntl_openstack-operators(01275567-9bc4-4728-98af-c399b3b386f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:33 crc kubenswrapper[4764]: E0127 07:31:33.819506 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" podUID="01275567-9bc4-4728-98af-c399b3b386f3" Jan 27 07:31:34 crc kubenswrapper[4764]: E0127 07:31:34.409889 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41" Jan 27 07:31:34 crc kubenswrapper[4764]: E0127 07:31:34.410086 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bf6cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-65dhc_openstack-operators(6ae3b798-c9a5-48a9-8608-af33f26cb323): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:34 crc kubenswrapper[4764]: E0127 07:31:34.411178 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" podUID="6ae3b798-c9a5-48a9-8608-af33f26cb323" Jan 27 07:31:34 crc kubenswrapper[4764]: E0127 07:31:34.547620 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/watcher-operator@sha256:611e4fb8bf6cd263664ccb437637105fba633ba8f701c228fd525a7a7b3c8d74\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" podUID="01275567-9bc4-4728-98af-c399b3b386f3" Jan 27 07:31:34 crc kubenswrapper[4764]: E0127 07:31:34.547674 4764 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" podUID="6ae3b798-c9a5-48a9-8608-af33f26cb323" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.089473 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/manila-operator@sha256:a81133a26aeb26d2ef1a73d063733e595349b2e94969abcb8bc100f8668ee702" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.089696 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/manila-operator@sha256:a81133a26aeb26d2ef1a73d063733e595349b2e94969abcb8bc100f8668ee702,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjc5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78b8f8fd84-8ffvc_openstack-operators(be2b755e-7957-421a-be94-398366a49522): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.091413 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" podUID="be2b755e-7957-421a-be94-398366a49522" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.552680 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/rh-ee-vfisarov/manila-operator@sha256:a81133a26aeb26d2ef1a73d063733e595349b2e94969abcb8bc100f8668ee702\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" podUID="be2b755e-7957-421a-be94-398366a49522" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.636999 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.637185 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpc6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-78f8b7b89c-ptrgx_openstack-operators(2b6a69f3-cf3b-465a-917c-78cf3248eb58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.638774 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" podUID="2b6a69f3-cf3b-465a-917c-78cf3248eb58" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.986883 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.987079 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-764tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6b8j8_openstack-operators(45b5dc0d-656e-4475-9414-ac8f1e6ae767): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:31:35 crc kubenswrapper[4764]: E0127 07:31:35.988341 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" podUID="45b5dc0d-656e-4475-9414-ac8f1e6ae767" Jan 27 07:31:36 crc kubenswrapper[4764]: E0127 07:31:36.559266 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/rh-ee-vfisarov/keystone-operator@sha256:3f07fd90b18820601ae78f45a9fbef53bf9e3ed131d5cfa1d424ae0145862dd6\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" podUID="2b6a69f3-cf3b-465a-917c-78cf3248eb58" Jan 27 07:31:36 crc kubenswrapper[4764]: 
E0127 07:31:36.559361 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" podUID="45b5dc0d-656e-4475-9414-ac8f1e6ae767" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.569950 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" event={"ID":"14776870-1ec8-423a-a486-ac576b83cb99","Type":"ContainerStarted","Data":"c257e9cbc6d5e0b2194a8427d7287fac53df78089a42b52d5802ece75c05c9e3"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.570666 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.572064 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" event={"ID":"93d08886-452d-4408-a924-c9e572c8b2f0","Type":"ContainerStarted","Data":"35349a93701d2a4d3a52e162b207933cbf96ea03fb925665fdf24ea3c727ad4b"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.572535 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.574215 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" event={"ID":"683bcc0e-1607-496b-8d4b-195a1eb2bbaa","Type":"ContainerStarted","Data":"c073bc122f6d6a4accf7045b2a708b6d8574dd9a45da534f0592387170eb292a"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.574709 4764 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.575928 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" event={"ID":"7f9c84bb-2150-49ce-9002-2719d491b2d9","Type":"ContainerStarted","Data":"83e992666ec6d76d83161715608556d7d6ea72df1cc025d48b62e989fd987541"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.576322 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.577489 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" event={"ID":"1328327d-b57d-4072-86de-039c4642a1f8","Type":"ContainerStarted","Data":"2c33b272af3d7c407b0071b30f21631e04a4927829735c226d6af006a43d4916"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.577873 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.579265 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" event={"ID":"85693d04-9de6-4da3-a527-c6d84ff033b2","Type":"ContainerStarted","Data":"f27f2fd98fed5ac14d1020248cc61b19e6fc95f3f8ff739b4ee62a6369f66658"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.579709 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.581046 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" 
event={"ID":"89a9b88a-dcc3-462f-a5f2-1311113a92ca","Type":"ContainerStarted","Data":"3e6af0cba2977b7f680de1f4e4d0832ca9fb5b66a802b1e5009eed7f0df40f58"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.581466 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.582985 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" event={"ID":"65cc2cb9-2b52-4597-b5b5-0ca087d2f306","Type":"ContainerStarted","Data":"303b18fa7bd7133981a91c0575260e43164c9736b671e2f233e947831dd34961"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.583481 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.585036 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" event={"ID":"87551348-318f-45f9-bea6-07750f5c0b7b","Type":"ContainerStarted","Data":"d09de49c511b6eb847c5204c397d518d148b8e16c4bb8298eaa32c2485b3ae24"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.585552 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.586970 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" event={"ID":"6986731a-1bd6-4bfe-a196-7d9be4e9e6f8","Type":"ContainerStarted","Data":"a912a3dd8723a254325474bb58a1c5e90afa1cf10d560271abf479e06c621054"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.587422 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.588746 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" event={"ID":"c8b21737-d0e6-447a-b230-50e2aed06fd1","Type":"ContainerStarted","Data":"aca1ee9fca094ffabe4e9a76e5616f68d391888f665c684bccc2f48c61efbaa7"} Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.589143 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.757853 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" podStartSLOduration=3.035942762 podStartE2EDuration="29.757828711s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:10.928477273 +0000 UTC m=+883.524099799" lastFinishedPulling="2026-01-27 07:31:37.650363222 +0000 UTC m=+910.245985748" observedRunningTime="2026-01-27 07:31:38.746121961 +0000 UTC m=+911.341744497" watchObservedRunningTime="2026-01-27 07:31:38.757828711 +0000 UTC m=+911.353451257" Jan 27 07:31:38 crc kubenswrapper[4764]: I0127 07:31:38.957460 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" podStartSLOduration=5.497230743 podStartE2EDuration="29.957420987s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.512326773 +0000 UTC m=+884.107949299" lastFinishedPulling="2026-01-27 07:31:35.972517017 +0000 UTC m=+908.568139543" observedRunningTime="2026-01-27 07:31:38.95302766 +0000 UTC m=+911.548650196" watchObservedRunningTime="2026-01-27 07:31:38.957420987 +0000 UTC m=+911.553043513" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 
07:31:39.024951 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" podStartSLOduration=3.392268887 podStartE2EDuration="30.024929044s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.017683534 +0000 UTC m=+883.613306060" lastFinishedPulling="2026-01-27 07:31:37.650343691 +0000 UTC m=+910.245966217" observedRunningTime="2026-01-27 07:31:39.024364559 +0000 UTC m=+911.619987085" watchObservedRunningTime="2026-01-27 07:31:39.024929044 +0000 UTC m=+911.620551570" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.072202 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" podStartSLOduration=4.547378871 podStartE2EDuration="30.072186775s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:10.44759219 +0000 UTC m=+883.043214716" lastFinishedPulling="2026-01-27 07:31:35.972400094 +0000 UTC m=+908.568022620" observedRunningTime="2026-01-27 07:31:39.070832119 +0000 UTC m=+911.666454645" watchObservedRunningTime="2026-01-27 07:31:39.072186775 +0000 UTC m=+911.667809301" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.112475 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" podStartSLOduration=5.157937038 podStartE2EDuration="30.112454092s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.017673274 +0000 UTC m=+883.613295790" lastFinishedPulling="2026-01-27 07:31:35.972190318 +0000 UTC m=+908.567812844" observedRunningTime="2026-01-27 07:31:39.108277521 +0000 UTC m=+911.703900067" watchObservedRunningTime="2026-01-27 07:31:39.112454092 +0000 UTC m=+911.708076618" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.130258 4764 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" podStartSLOduration=4.35601893 podStartE2EDuration="30.130237732s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.930211892 +0000 UTC m=+884.525834408" lastFinishedPulling="2026-01-27 07:31:37.704430684 +0000 UTC m=+910.300053210" observedRunningTime="2026-01-27 07:31:39.122795395 +0000 UTC m=+911.718417921" watchObservedRunningTime="2026-01-27 07:31:39.130237732 +0000 UTC m=+911.725860248" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.152162 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" podStartSLOduration=4.269453898 podStartE2EDuration="30.152144593s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.921059379 +0000 UTC m=+884.516681905" lastFinishedPulling="2026-01-27 07:31:37.803750074 +0000 UTC m=+910.399372600" observedRunningTime="2026-01-27 07:31:39.146022351 +0000 UTC m=+911.741644877" watchObservedRunningTime="2026-01-27 07:31:39.152144593 +0000 UTC m=+911.747767119" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.199066 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" podStartSLOduration=4.426949989 podStartE2EDuration="30.199049065s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.920654168 +0000 UTC m=+884.516276694" lastFinishedPulling="2026-01-27 07:31:37.692753244 +0000 UTC m=+910.288375770" observedRunningTime="2026-01-27 07:31:39.169578174 +0000 UTC m=+911.765200700" watchObservedRunningTime="2026-01-27 07:31:39.199049065 +0000 UTC m=+911.794671601" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.200089 4764 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" podStartSLOduration=4.417094702 podStartE2EDuration="30.200082082s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.867355221 +0000 UTC m=+884.462977747" lastFinishedPulling="2026-01-27 07:31:37.650342601 +0000 UTC m=+910.245965127" observedRunningTime="2026-01-27 07:31:39.194781302 +0000 UTC m=+911.790403818" watchObservedRunningTime="2026-01-27 07:31:39.200082082 +0000 UTC m=+911.795704608" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.241553 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" podStartSLOduration=5.044818729 podStartE2EDuration="30.2415317s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.900511022 +0000 UTC m=+884.496133548" lastFinishedPulling="2026-01-27 07:31:37.097223983 +0000 UTC m=+909.692846519" observedRunningTime="2026-01-27 07:31:39.217506574 +0000 UTC m=+911.813129110" watchObservedRunningTime="2026-01-27 07:31:39.2415317 +0000 UTC m=+911.837154226" Jan 27 07:31:39 crc kubenswrapper[4764]: I0127 07:31:39.251831 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" podStartSLOduration=4.449307932 podStartE2EDuration="30.251811972s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.931816415 +0000 UTC m=+884.527438941" lastFinishedPulling="2026-01-27 07:31:37.734320455 +0000 UTC m=+910.329942981" observedRunningTime="2026-01-27 07:31:39.248224477 +0000 UTC m=+911.843847003" watchObservedRunningTime="2026-01-27 07:31:39.251811972 +0000 UTC m=+911.847434508" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.389402 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.397501 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e65a71-8e22-4256-81eb-cd9a58927e5a-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-27ncl\" (UID: \"68e65a71-8e22-4256-81eb-cd9a58927e5a\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.584646 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kx4xr" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.593261 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.694469 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.706409 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b3e29bf-af7b-4575-a91b-042b85a244c9-cert\") pod \"openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk\" (UID: \"2b3e29bf-af7b-4575-a91b-042b85a244c9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.795418 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl"] Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.880972 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rxz98" Jan 27 07:31:41 crc kubenswrapper[4764]: I0127 07:31:41.889793 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.113507 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.113876 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.121363 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-metrics-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.128101 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce339e1f-d181-42e1-bb9a-d6401699560f-webhook-certs\") pod \"openstack-operator-controller-manager-76958f4d87-pt4l2\" (UID: \"ce339e1f-d181-42e1-bb9a-d6401699560f\") " pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.209104 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk"] Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.215045 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-kdw8c" Jan 27 07:31:42 crc kubenswrapper[4764]: W0127 07:31:42.218963 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b3e29bf_af7b_4575_a91b_042b85a244c9.slice/crio-b4beb01da61f15ead87932db9336eeb87b4e39216602eb3734e68707d6cd2f32 WatchSource:0}: Error finding container b4beb01da61f15ead87932db9336eeb87b4e39216602eb3734e68707d6cd2f32: Status 404 returned error can't find the container with id b4beb01da61f15ead87932db9336eeb87b4e39216602eb3734e68707d6cd2f32 Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.223062 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.618241 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" event={"ID":"68e65a71-8e22-4256-81eb-cd9a58927e5a","Type":"ContainerStarted","Data":"4d43b5e8ac72d18fdbe3be82c56ff6944fd9448de6cb4356888cfdc66608565b"} Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.624676 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" event={"ID":"f72762ca-57ca-4151-aa29-f2a7db4be1f0","Type":"ContainerStarted","Data":"a147f4c7b5855d3c6f36d6e12f741ea6312f67e9e0d32b576623bf0f1ae6d9df"} Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.625239 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" Jan 27 07:31:42 crc kubenswrapper[4764]: 
I0127 07:31:42.626733 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" event={"ID":"2b3e29bf-af7b-4575-a91b-042b85a244c9","Type":"ContainerStarted","Data":"b4beb01da61f15ead87932db9336eeb87b4e39216602eb3734e68707d6cd2f32"} Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.648047 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" podStartSLOduration=3.372708154 podStartE2EDuration="33.648023944s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.868256235 +0000 UTC m=+884.463878761" lastFinishedPulling="2026-01-27 07:31:42.143572025 +0000 UTC m=+914.739194551" observedRunningTime="2026-01-27 07:31:42.646667258 +0000 UTC m=+915.242289794" watchObservedRunningTime="2026-01-27 07:31:42.648023944 +0000 UTC m=+915.243646470" Jan 27 07:31:42 crc kubenswrapper[4764]: I0127 07:31:42.660614 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2"] Jan 27 07:31:43 crc kubenswrapper[4764]: I0127 07:31:43.636510 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" event={"ID":"ce339e1f-d181-42e1-bb9a-d6401699560f","Type":"ContainerStarted","Data":"dcc5e6b6dd7ba9aea1d4dd2b212f55ca643df0d61d307eec3fca147f75d5a47e"} Jan 27 07:31:43 crc kubenswrapper[4764]: I0127 07:31:43.636848 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" event={"ID":"ce339e1f-d181-42e1-bb9a-d6401699560f","Type":"ContainerStarted","Data":"8006f3cb346728bec4bb5ff8d95420accf5a70acc356b10f6e0bce0f3b74927c"} Jan 27 07:31:43 crc kubenswrapper[4764]: I0127 07:31:43.636871 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:43 crc kubenswrapper[4764]: I0127 07:31:43.639133 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" event={"ID":"11dbcaab-8ae4-454f-bc9a-5082597154b2","Type":"ContainerStarted","Data":"ed7b05441c3f961ea6b55e048810fa5f9445e3f34e40d066932a448888702939"} Jan 27 07:31:43 crc kubenswrapper[4764]: I0127 07:31:43.639429 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" Jan 27 07:31:43 crc kubenswrapper[4764]: I0127 07:31:43.664510 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" podStartSLOduration=33.664482393 podStartE2EDuration="33.664482393s" podCreationTimestamp="2026-01-27 07:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:31:43.662065609 +0000 UTC m=+916.257688135" watchObservedRunningTime="2026-01-27 07:31:43.664482393 +0000 UTC m=+916.260104929" Jan 27 07:31:43 crc kubenswrapper[4764]: I0127 07:31:43.684643 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" podStartSLOduration=2.31965996 podStartE2EDuration="34.684620526s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:10.496741276 +0000 UTC m=+883.092363812" lastFinishedPulling="2026-01-27 07:31:42.861701852 +0000 UTC m=+915.457324378" observedRunningTime="2026-01-27 07:31:43.677500967 +0000 UTC m=+916.273123493" watchObservedRunningTime="2026-01-27 07:31:43.684620526 +0000 UTC m=+916.280243042" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.334553 4764 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fhfp"] Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.336961 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.348793 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fhfp"] Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.445680 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-catalog-content\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.445760 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clzjd\" (UniqueName: \"kubernetes.io/projected/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-kube-api-access-clzjd\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.445824 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-utilities\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.547362 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-utilities\") pod \"redhat-operators-4fhfp\" (UID: 
\"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.547526 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-catalog-content\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.547616 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clzjd\" (UniqueName: \"kubernetes.io/projected/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-kube-api-access-clzjd\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.547890 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-utilities\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.548697 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-catalog-content\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.566970 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clzjd\" (UniqueName: \"kubernetes.io/projected/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-kube-api-access-clzjd\") pod \"redhat-operators-4fhfp\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " 
pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:44 crc kubenswrapper[4764]: I0127 07:31:44.668281 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.453256 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fhfp"] Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.681637 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" event={"ID":"2b3e29bf-af7b-4575-a91b-042b85a244c9","Type":"ContainerStarted","Data":"fbe70aeea133f757ea388df455d7124de5e3d8aba890bb9e6ac0175605ad4756"} Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.682573 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.693550 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" event={"ID":"68e65a71-8e22-4256-81eb-cd9a58927e5a","Type":"ContainerStarted","Data":"d7c29ef1f311c8bc05b9c92956b083590a7490c4bb788763b80d3c49b84db467"} Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.695224 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.712045 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" event={"ID":"6f80a85c-409f-4e68-ab1d-8ee9bb19e544","Type":"ContainerStarted","Data":"940f0c7ab867a54340363e4113630c3523a5aaab7a2db27e4108956b311be90e"} Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.712753 4764 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.718417 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhfp" event={"ID":"b28197cc-06c5-4fcb-8ea5-2a698afd9f67","Type":"ContainerStarted","Data":"096448c58bfa5907c08cc7afde7f53d683d04019598fdf570a28a7738c731232"} Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.754334 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" podStartSLOduration=34.019971705 podStartE2EDuration="39.754318826s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:42.220892333 +0000 UTC m=+914.816514859" lastFinishedPulling="2026-01-27 07:31:47.955239404 +0000 UTC m=+920.550861980" observedRunningTime="2026-01-27 07:31:48.744119896 +0000 UTC m=+921.339742422" watchObservedRunningTime="2026-01-27 07:31:48.754318826 +0000 UTC m=+921.349941352" Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.781100 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" podStartSLOduration=33.615415992 podStartE2EDuration="39.781080215s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:41.803117479 +0000 UTC m=+914.398740005" lastFinishedPulling="2026-01-27 07:31:47.968781702 +0000 UTC m=+920.564404228" observedRunningTime="2026-01-27 07:31:48.778310271 +0000 UTC m=+921.373932797" watchObservedRunningTime="2026-01-27 07:31:48.781080215 +0000 UTC m=+921.376702741" Jan 27 07:31:48 crc kubenswrapper[4764]: I0127 07:31:48.802982 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" podStartSLOduration=3.736710276 
podStartE2EDuration="39.802963944s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.890316001 +0000 UTC m=+884.485938527" lastFinishedPulling="2026-01-27 07:31:47.956569669 +0000 UTC m=+920.552192195" observedRunningTime="2026-01-27 07:31:48.800142459 +0000 UTC m=+921.395764985" watchObservedRunningTime="2026-01-27 07:31:48.802963944 +0000 UTC m=+921.398586470" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.609682 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75b8f798ff-nl98d" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.618536 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-76d4d5b8f9-r76hr" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.646463 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5fdc687f5-cxmrb" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.676370 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84d5bb46b-bh2ct" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.702189 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hlkrc" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.726889 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" event={"ID":"01275567-9bc4-4728-98af-c399b3b386f3","Type":"ContainerStarted","Data":"edf97496149007bda7fcaadb4bbe135ee819035dce88cfe24f86a117156de5e8"} Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.727104 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.729039 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" event={"ID":"6ae3b798-c9a5-48a9-8608-af33f26cb323","Type":"ContainerStarted","Data":"a540b373eae27a756ad1637ff93e37a6fb02040308d471f0c4711506588f8f38"} Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.729241 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.731705 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" event={"ID":"e5c21e1e-f5ae-4f87-8789-2638c0b4dea1","Type":"ContainerStarted","Data":"277de32902a7b772d1b88a9d0821bcf26b9b486f1b09ee72233137d9a3aa27c6"} Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.731953 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.733450 4764 generic.go:334] "Generic (PLEG): container finished" podID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerID="cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad" exitCode=0 Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.733495 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhfp" event={"ID":"b28197cc-06c5-4fcb-8ea5-2a698afd9f67","Type":"ContainerDied","Data":"cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad"} Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.735204 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.736130 
4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" event={"ID":"be2b755e-7957-421a-be94-398366a49522","Type":"ContainerStarted","Data":"c3a207e397940ad994cab42a8cb16d8038990890716fa6883b7d6619563710ae"} Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.736572 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.744394 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nrlkg" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.761387 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" podStartSLOduration=3.879702441 podStartE2EDuration="40.761368125s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.565536588 +0000 UTC m=+884.161159124" lastFinishedPulling="2026-01-27 07:31:48.447202282 +0000 UTC m=+921.042824808" observedRunningTime="2026-01-27 07:31:49.756592768 +0000 UTC m=+922.352215314" watchObservedRunningTime="2026-01-27 07:31:49.761368125 +0000 UTC m=+922.356990641" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.785501 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" podStartSLOduration=3.526427544 podStartE2EDuration="40.785484123s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.465427607 +0000 UTC m=+884.061050133" lastFinishedPulling="2026-01-27 07:31:48.724484186 +0000 UTC m=+921.320106712" observedRunningTime="2026-01-27 07:31:49.781901818 +0000 UTC m=+922.377524354" watchObservedRunningTime="2026-01-27 07:31:49.785484123 
+0000 UTC m=+922.381106649" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.830233 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" podStartSLOduration=4.114295669 podStartE2EDuration="40.830215878s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.930197361 +0000 UTC m=+884.525819887" lastFinishedPulling="2026-01-27 07:31:48.64611757 +0000 UTC m=+921.241740096" observedRunningTime="2026-01-27 07:31:49.810368432 +0000 UTC m=+922.405990968" watchObservedRunningTime="2026-01-27 07:31:49.830215878 +0000 UTC m=+922.425838404" Jan 27 07:31:49 crc kubenswrapper[4764]: I0127 07:31:49.831168 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" podStartSLOduration=4.185598255 podStartE2EDuration="40.831164623s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.905598808 +0000 UTC m=+884.501221334" lastFinishedPulling="2026-01-27 07:31:48.551165176 +0000 UTC m=+921.146787702" observedRunningTime="2026-01-27 07:31:49.82954364 +0000 UTC m=+922.425166196" watchObservedRunningTime="2026-01-27 07:31:49.831164623 +0000 UTC m=+922.426787149" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.035246 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-569695f6c5-87prv" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.053817 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74ffd97575-rhpc8" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.076278 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bf4858b78-xr5lf" Jan 27 07:31:50 
crc kubenswrapper[4764]: I0127 07:31:50.138291 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-x6k2g" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.181004 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-7748d79f84-xvlnc" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.230992 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7db57dc8bf-r9k82" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.366931 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-lcd48" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.745049 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" event={"ID":"2b6a69f3-cf3b-465a-917c-78cf3248eb58","Type":"ContainerStarted","Data":"50e0593cc73faa5ed00a5eb2f80b1e023f9573e49f0fe99d458a0fb212cb3078"} Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.746577 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.749846 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhfp" event={"ID":"b28197cc-06c5-4fcb-8ea5-2a698afd9f67","Type":"ContainerStarted","Data":"05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77"} Jan 27 07:31:50 crc kubenswrapper[4764]: I0127 07:31:50.806512 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" podStartSLOduration=3.412143805 
podStartE2EDuration="41.806488163s" podCreationTimestamp="2026-01-27 07:31:09 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.533431944 +0000 UTC m=+884.129054470" lastFinishedPulling="2026-01-27 07:31:49.927776302 +0000 UTC m=+922.523398828" observedRunningTime="2026-01-27 07:31:50.760095494 +0000 UTC m=+923.355718020" watchObservedRunningTime="2026-01-27 07:31:50.806488163 +0000 UTC m=+923.402110689" Jan 27 07:31:51 crc kubenswrapper[4764]: I0127 07:31:51.756875 4764 generic.go:334] "Generic (PLEG): container finished" podID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerID="05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77" exitCode=0 Jan 27 07:31:51 crc kubenswrapper[4764]: I0127 07:31:51.756931 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhfp" event={"ID":"b28197cc-06c5-4fcb-8ea5-2a698afd9f67","Type":"ContainerDied","Data":"05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77"} Jan 27 07:31:52 crc kubenswrapper[4764]: I0127 07:31:52.228876 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-76958f4d87-pt4l2" Jan 27 07:31:59 crc kubenswrapper[4764]: I0127 07:31:59.849050 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-78f8b7b89c-ptrgx" Jan 27 07:31:59 crc kubenswrapper[4764]: I0127 07:31:59.925975 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78b8f8fd84-8ffvc" Jan 27 07:31:59 crc kubenswrapper[4764]: I0127 07:31:59.926482 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-58865f87b4-wpkhc" Jan 27 07:31:59 crc kubenswrapper[4764]: I0127 07:31:59.930417 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-65dhc" Jan 27 07:32:00 crc kubenswrapper[4764]: I0127 07:32:00.199135 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-65596dbf77-h5nkr" Jan 27 07:32:00 crc kubenswrapper[4764]: I0127 07:32:00.380075 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6476466c7c-2hntl" Jan 27 07:32:01 crc kubenswrapper[4764]: I0127 07:32:01.600058 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-27ncl" Jan 27 07:32:01 crc kubenswrapper[4764]: I0127 07:32:01.901770 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk" Jan 27 07:32:03 crc kubenswrapper[4764]: I0127 07:32:03.855161 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhfp" event={"ID":"b28197cc-06c5-4fcb-8ea5-2a698afd9f67","Type":"ContainerStarted","Data":"f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a"} Jan 27 07:32:03 crc kubenswrapper[4764]: I0127 07:32:03.856697 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" event={"ID":"45b5dc0d-656e-4475-9414-ac8f1e6ae767","Type":"ContainerStarted","Data":"98c49dcab5d80f35e7ca74bd118730f7aeb34c2712bc112fe634951cdaa059aa"} Jan 27 07:32:03 crc kubenswrapper[4764]: I0127 07:32:03.874953 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fhfp" podStartSLOduration=6.574478828 podStartE2EDuration="19.874935582s" podCreationTimestamp="2026-01-27 07:31:44 +0000 UTC" firstStartedPulling="2026-01-27 07:31:49.734729559 +0000 UTC 
m=+922.330352085" lastFinishedPulling="2026-01-27 07:32:03.035186303 +0000 UTC m=+935.630808839" observedRunningTime="2026-01-27 07:32:03.871705756 +0000 UTC m=+936.467328282" watchObservedRunningTime="2026-01-27 07:32:03.874935582 +0000 UTC m=+936.470558108" Jan 27 07:32:03 crc kubenswrapper[4764]: I0127 07:32:03.888301 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6b8j8" podStartSLOduration=2.450541608 podStartE2EDuration="53.888262155s" podCreationTimestamp="2026-01-27 07:31:10 +0000 UTC" firstStartedPulling="2026-01-27 07:31:11.889053818 +0000 UTC m=+884.484676344" lastFinishedPulling="2026-01-27 07:32:03.326774345 +0000 UTC m=+935.922396891" observedRunningTime="2026-01-27 07:32:03.887471054 +0000 UTC m=+936.483093580" watchObservedRunningTime="2026-01-27 07:32:03.888262155 +0000 UTC m=+936.483884681" Jan 27 07:32:04 crc kubenswrapper[4764]: I0127 07:32:04.668908 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:32:04 crc kubenswrapper[4764]: I0127 07:32:04.668953 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:32:05 crc kubenswrapper[4764]: I0127 07:32:05.741353 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fhfp" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="registry-server" probeResult="failure" output=< Jan 27 07:32:05 crc kubenswrapper[4764]: timeout: failed to connect service ":50051" within 1s Jan 27 07:32:05 crc kubenswrapper[4764]: > Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.753253 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xhssh"] Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.756654 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.762871 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhssh"] Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.792824 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-utilities\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.793264 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-catalog-content\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.793296 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmghv\" (UniqueName: \"kubernetes.io/projected/4c644818-1c75-4275-83aa-b24cf72ed05c-kube-api-access-hmghv\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.894416 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-utilities\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.894506 4764 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-catalog-content\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.894530 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmghv\" (UniqueName: \"kubernetes.io/projected/4c644818-1c75-4275-83aa-b24cf72ed05c-kube-api-access-hmghv\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.895108 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-catalog-content\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.895123 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-utilities\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:08 crc kubenswrapper[4764]: I0127 07:32:08.916292 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmghv\" (UniqueName: \"kubernetes.io/projected/4c644818-1c75-4275-83aa-b24cf72ed05c-kube-api-access-hmghv\") pod \"redhat-marketplace-xhssh\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:09 crc kubenswrapper[4764]: I0127 07:32:09.084241 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:09 crc kubenswrapper[4764]: I0127 07:32:09.579558 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhssh"] Jan 27 07:32:09 crc kubenswrapper[4764]: I0127 07:32:09.898093 4764 generic.go:334] "Generic (PLEG): container finished" podID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerID="879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672" exitCode=0 Jan 27 07:32:09 crc kubenswrapper[4764]: I0127 07:32:09.898141 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhssh" event={"ID":"4c644818-1c75-4275-83aa-b24cf72ed05c","Type":"ContainerDied","Data":"879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672"} Jan 27 07:32:09 crc kubenswrapper[4764]: I0127 07:32:09.898168 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhssh" event={"ID":"4c644818-1c75-4275-83aa-b24cf72ed05c","Type":"ContainerStarted","Data":"3c131a261f2ffd4f0d4eb88516559f6476ff3d2cd67f0d4cbc38f7102950707c"} Jan 27 07:32:11 crc kubenswrapper[4764]: I0127 07:32:11.915490 4764 generic.go:334] "Generic (PLEG): container finished" podID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerID="1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556" exitCode=0 Jan 27 07:32:11 crc kubenswrapper[4764]: I0127 07:32:11.915679 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhssh" event={"ID":"4c644818-1c75-4275-83aa-b24cf72ed05c","Type":"ContainerDied","Data":"1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556"} Jan 27 07:32:12 crc kubenswrapper[4764]: I0127 07:32:12.926603 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhssh" 
event={"ID":"4c644818-1c75-4275-83aa-b24cf72ed05c","Type":"ContainerStarted","Data":"da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4"} Jan 27 07:32:12 crc kubenswrapper[4764]: I0127 07:32:12.961770 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xhssh" podStartSLOduration=2.426363903 podStartE2EDuration="4.961751336s" podCreationTimestamp="2026-01-27 07:32:08 +0000 UTC" firstStartedPulling="2026-01-27 07:32:09.899825038 +0000 UTC m=+942.495447564" lastFinishedPulling="2026-01-27 07:32:12.435212461 +0000 UTC m=+945.030834997" observedRunningTime="2026-01-27 07:32:12.955546461 +0000 UTC m=+945.551169007" watchObservedRunningTime="2026-01-27 07:32:12.961751336 +0000 UTC m=+945.557373862" Jan 27 07:32:14 crc kubenswrapper[4764]: I0127 07:32:14.725330 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:32:14 crc kubenswrapper[4764]: I0127 07:32:14.779974 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:32:15 crc kubenswrapper[4764]: I0127 07:32:15.121669 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fhfp"] Jan 27 07:32:15 crc kubenswrapper[4764]: I0127 07:32:15.947733 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4fhfp" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="registry-server" containerID="cri-o://f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a" gracePeriod=2 Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.343517 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.518873 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-catalog-content\") pod \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.518981 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-utilities\") pod \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.519046 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clzjd\" (UniqueName: \"kubernetes.io/projected/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-kube-api-access-clzjd\") pod \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\" (UID: \"b28197cc-06c5-4fcb-8ea5-2a698afd9f67\") " Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.520189 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-utilities" (OuterVolumeSpecName: "utilities") pod "b28197cc-06c5-4fcb-8ea5-2a698afd9f67" (UID: "b28197cc-06c5-4fcb-8ea5-2a698afd9f67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.531623 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-kube-api-access-clzjd" (OuterVolumeSpecName: "kube-api-access-clzjd") pod "b28197cc-06c5-4fcb-8ea5-2a698afd9f67" (UID: "b28197cc-06c5-4fcb-8ea5-2a698afd9f67"). InnerVolumeSpecName "kube-api-access-clzjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.623687 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.624241 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clzjd\" (UniqueName: \"kubernetes.io/projected/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-kube-api-access-clzjd\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.671336 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b28197cc-06c5-4fcb-8ea5-2a698afd9f67" (UID: "b28197cc-06c5-4fcb-8ea5-2a698afd9f67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.726184 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28197cc-06c5-4fcb-8ea5-2a698afd9f67-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.958099 4764 generic.go:334] "Generic (PLEG): container finished" podID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerID="f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a" exitCode=0 Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.958144 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fhfp" event={"ID":"b28197cc-06c5-4fcb-8ea5-2a698afd9f67","Type":"ContainerDied","Data":"f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a"} Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.958176 4764 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-4fhfp" event={"ID":"b28197cc-06c5-4fcb-8ea5-2a698afd9f67","Type":"ContainerDied","Data":"096448c58bfa5907c08cc7afde7f53d683d04019598fdf570a28a7738c731232"} Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.958197 4764 scope.go:117] "RemoveContainer" containerID="f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.958205 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fhfp" Jan 27 07:32:16 crc kubenswrapper[4764]: I0127 07:32:16.986258 4764 scope.go:117] "RemoveContainer" containerID="05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77" Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.008904 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fhfp"] Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.017331 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4fhfp"] Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.028638 4764 scope.go:117] "RemoveContainer" containerID="cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad" Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.048619 4764 scope.go:117] "RemoveContainer" containerID="f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a" Jan 27 07:32:17 crc kubenswrapper[4764]: E0127 07:32:17.049173 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a\": container with ID starting with f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a not found: ID does not exist" containerID="f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a" Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.049216 4764 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a"} err="failed to get container status \"f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a\": rpc error: code = NotFound desc = could not find container \"f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a\": container with ID starting with f04d2462d91f00102d3963339e27a54448b378bfcbe1ca01b95b4a90b12f441a not found: ID does not exist" Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.049243 4764 scope.go:117] "RemoveContainer" containerID="05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77" Jan 27 07:32:17 crc kubenswrapper[4764]: E0127 07:32:17.049616 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77\": container with ID starting with 05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77 not found: ID does not exist" containerID="05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77" Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.049637 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77"} err="failed to get container status \"05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77\": rpc error: code = NotFound desc = could not find container \"05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77\": container with ID starting with 05fe99890d8ee9c34a4346d0516e43ce69fe9c5ec3e9662f0420fc69f2730a77 not found: ID does not exist" Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.049651 4764 scope.go:117] "RemoveContainer" containerID="cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad" Jan 27 07:32:17 crc kubenswrapper[4764]: E0127 
07:32:17.049878 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad\": container with ID starting with cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad not found: ID does not exist" containerID="cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad" Jan 27 07:32:17 crc kubenswrapper[4764]: I0127 07:32:17.049900 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad"} err="failed to get container status \"cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad\": rpc error: code = NotFound desc = could not find container \"cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad\": container with ID starting with cbd117abd8e367f3cb35f23c650bdf2c118811c793eb10acf152bb40bc6e46ad not found: ID does not exist" Jan 27 07:32:18 crc kubenswrapper[4764]: I0127 07:32:18.458066 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" path="/var/lib/kubelet/pods/b28197cc-06c5-4fcb-8ea5-2a698afd9f67/volumes" Jan 27 07:32:19 crc kubenswrapper[4764]: I0127 07:32:19.085021 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:19 crc kubenswrapper[4764]: I0127 07:32:19.085089 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:19 crc kubenswrapper[4764]: I0127 07:32:19.159354 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:20 crc kubenswrapper[4764]: I0127 07:32:20.046749 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:20 crc kubenswrapper[4764]: I0127 07:32:20.518152 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhssh"] Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.474967 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xhqjz"] Jan 27 07:32:21 crc kubenswrapper[4764]: E0127 07:32:21.475630 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="registry-server" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.475645 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="registry-server" Jan 27 07:32:21 crc kubenswrapper[4764]: E0127 07:32:21.475661 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="extract-utilities" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.475667 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="extract-utilities" Jan 27 07:32:21 crc kubenswrapper[4764]: E0127 07:32:21.475676 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="extract-content" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.475682 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="extract-content" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.475812 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28197cc-06c5-4fcb-8ea5-2a698afd9f67" containerName="registry-server" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.476657 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.482040 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.482062 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.482419 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jjssp" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.484190 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.501205 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xhqjz"] Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.539607 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-65kgg"] Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.540968 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.543037 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.554158 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-65kgg"] Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.674082 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvk9\" (UniqueName: \"kubernetes.io/projected/e1399aa2-374e-4f4c-bd18-43358d7283b6-kube-api-access-smvk9\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.674144 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllt6\" (UniqueName: \"kubernetes.io/projected/fec45bf1-8fae-4d97-85ac-c01b5b709af9-kube-api-access-lllt6\") pod \"dnsmasq-dns-84bb9d8bd9-xhqjz\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.674198 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec45bf1-8fae-4d97-85ac-c01b5b709af9-config\") pod \"dnsmasq-dns-84bb9d8bd9-xhqjz\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.674279 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-config\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " 
pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.674299 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-dns-svc\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.775566 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec45bf1-8fae-4d97-85ac-c01b5b709af9-config\") pod \"dnsmasq-dns-84bb9d8bd9-xhqjz\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.775658 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-config\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.775685 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-dns-svc\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.775759 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvk9\" (UniqueName: \"kubernetes.io/projected/e1399aa2-374e-4f4c-bd18-43358d7283b6-kube-api-access-smvk9\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc 
kubenswrapper[4764]: I0127 07:32:21.775787 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lllt6\" (UniqueName: \"kubernetes.io/projected/fec45bf1-8fae-4d97-85ac-c01b5b709af9-kube-api-access-lllt6\") pod \"dnsmasq-dns-84bb9d8bd9-xhqjz\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.776663 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec45bf1-8fae-4d97-85ac-c01b5b709af9-config\") pod \"dnsmasq-dns-84bb9d8bd9-xhqjz\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.776707 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-dns-svc\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.776666 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-config\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.799383 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvk9\" (UniqueName: \"kubernetes.io/projected/e1399aa2-374e-4f4c-bd18-43358d7283b6-kube-api-access-smvk9\") pod \"dnsmasq-dns-5f854695bc-65kgg\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.799518 4764 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-lllt6\" (UniqueName: \"kubernetes.io/projected/fec45bf1-8fae-4d97-85ac-c01b5b709af9-kube-api-access-lllt6\") pod \"dnsmasq-dns-84bb9d8bd9-xhqjz\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.800589 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.866560 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:21 crc kubenswrapper[4764]: I0127 07:32:21.998714 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xhssh" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="registry-server" containerID="cri-o://da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4" gracePeriod=2 Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.077510 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xhqjz"] Jan 27 07:32:22 crc kubenswrapper[4764]: W0127 07:32:22.096589 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfec45bf1_8fae_4d97_85ac_c01b5b709af9.slice/crio-bbe0c1dcdc52fe990f03953af898962410f1662f1748b574116a7ed9d25eb2c1 WatchSource:0}: Error finding container bbe0c1dcdc52fe990f03953af898962410f1662f1748b574116a7ed9d25eb2c1: Status 404 returned error can't find the container with id bbe0c1dcdc52fe990f03953af898962410f1662f1748b574116a7ed9d25eb2c1 Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.321641 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.395141 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-65kgg"] Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.489197 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmghv\" (UniqueName: \"kubernetes.io/projected/4c644818-1c75-4275-83aa-b24cf72ed05c-kube-api-access-hmghv\") pod \"4c644818-1c75-4275-83aa-b24cf72ed05c\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.489278 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-utilities\") pod \"4c644818-1c75-4275-83aa-b24cf72ed05c\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.489317 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-catalog-content\") pod \"4c644818-1c75-4275-83aa-b24cf72ed05c\" (UID: \"4c644818-1c75-4275-83aa-b24cf72ed05c\") " Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.492239 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-utilities" (OuterVolumeSpecName: "utilities") pod "4c644818-1c75-4275-83aa-b24cf72ed05c" (UID: "4c644818-1c75-4275-83aa-b24cf72ed05c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.505554 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c644818-1c75-4275-83aa-b24cf72ed05c-kube-api-access-hmghv" (OuterVolumeSpecName: "kube-api-access-hmghv") pod "4c644818-1c75-4275-83aa-b24cf72ed05c" (UID: "4c644818-1c75-4275-83aa-b24cf72ed05c"). InnerVolumeSpecName "kube-api-access-hmghv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.543830 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c644818-1c75-4275-83aa-b24cf72ed05c" (UID: "4c644818-1c75-4275-83aa-b24cf72ed05c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.591180 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.591563 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c644818-1c75-4275-83aa-b24cf72ed05c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:22 crc kubenswrapper[4764]: I0127 07:32:22.591626 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmghv\" (UniqueName: \"kubernetes.io/projected/4c644818-1c75-4275-83aa-b24cf72ed05c-kube-api-access-hmghv\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.015853 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" 
event={"ID":"fec45bf1-8fae-4d97-85ac-c01b5b709af9","Type":"ContainerStarted","Data":"bbe0c1dcdc52fe990f03953af898962410f1662f1748b574116a7ed9d25eb2c1"} Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.018718 4764 generic.go:334] "Generic (PLEG): container finished" podID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerID="da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4" exitCode=0 Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.018857 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhssh" event={"ID":"4c644818-1c75-4275-83aa-b24cf72ed05c","Type":"ContainerDied","Data":"da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4"} Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.020015 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhssh" event={"ID":"4c644818-1c75-4275-83aa-b24cf72ed05c","Type":"ContainerDied","Data":"3c131a261f2ffd4f0d4eb88516559f6476ff3d2cd67f0d4cbc38f7102950707c"} Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.020090 4764 scope.go:117] "RemoveContainer" containerID="da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.020165 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhssh" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.022801 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-65kgg" event={"ID":"e1399aa2-374e-4f4c-bd18-43358d7283b6","Type":"ContainerStarted","Data":"2827745a784d1f5043a974e95153e2ccc95f3ea874cd45c2baba54269b0b66f1"} Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.056524 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhssh"] Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.058030 4764 scope.go:117] "RemoveContainer" containerID="1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.062323 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhssh"] Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.102096 4764 scope.go:117] "RemoveContainer" containerID="879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.129815 4764 scope.go:117] "RemoveContainer" containerID="da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4" Jan 27 07:32:23 crc kubenswrapper[4764]: E0127 07:32:23.130181 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4\": container with ID starting with da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4 not found: ID does not exist" containerID="da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.130217 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4"} err="failed to get container 
status \"da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4\": rpc error: code = NotFound desc = could not find container \"da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4\": container with ID starting with da9ac6860baaacd1c4e6907fc3eb415c20753d0e7ecb970bcc8806492b8c02f4 not found: ID does not exist" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.130239 4764 scope.go:117] "RemoveContainer" containerID="1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556" Jan 27 07:32:23 crc kubenswrapper[4764]: E0127 07:32:23.130645 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556\": container with ID starting with 1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556 not found: ID does not exist" containerID="1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.130666 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556"} err="failed to get container status \"1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556\": rpc error: code = NotFound desc = could not find container \"1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556\": container with ID starting with 1bc28496d44a49ab36b7417cf13a9b8b86ea1ea15e9e97efc945a1c0412ae556 not found: ID does not exist" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.130679 4764 scope.go:117] "RemoveContainer" containerID="879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672" Jan 27 07:32:23 crc kubenswrapper[4764]: E0127 07:32:23.130981 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672\": container with ID starting with 879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672 not found: ID does not exist" containerID="879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672" Jan 27 07:32:23 crc kubenswrapper[4764]: I0127 07:32:23.131013 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672"} err="failed to get container status \"879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672\": rpc error: code = NotFound desc = could not find container \"879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672\": container with ID starting with 879eaaddc7172e901533b25e8331cdd90b5086e427661b0f27d69df92e833672 not found: ID does not exist" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.460897 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" path="/var/lib/kubelet/pods/4c644818-1c75-4275-83aa-b24cf72ed05c/volumes" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.510430 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-65kgg"] Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.536861 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vcrbq"] Jan 27 07:32:24 crc kubenswrapper[4764]: E0127 07:32:24.537128 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="extract-utilities" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.537144 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="extract-utilities" Jan 27 07:32:24 crc kubenswrapper[4764]: E0127 07:32:24.537155 4764 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="registry-server" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.537164 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="registry-server" Jan 27 07:32:24 crc kubenswrapper[4764]: E0127 07:32:24.537192 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="extract-content" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.537197 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="extract-content" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.537326 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c644818-1c75-4275-83aa-b24cf72ed05c" containerName="registry-server" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.538024 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.554239 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vcrbq"] Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.623203 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-config\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.623256 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " 
pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.623287 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrs5\" (UniqueName: \"kubernetes.io/projected/6ab25637-135f-4da6-8afb-6afa49e80ae9-kube-api-access-ghrs5\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.725786 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-config\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.725840 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.725898 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrs5\" (UniqueName: \"kubernetes.io/projected/6ab25637-135f-4da6-8afb-6afa49e80ae9-kube-api-access-ghrs5\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.728014 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-config\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 
crc kubenswrapper[4764]: I0127 07:32:24.728410 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.758773 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrs5\" (UniqueName: \"kubernetes.io/projected/6ab25637-135f-4da6-8afb-6afa49e80ae9-kube-api-access-ghrs5\") pod \"dnsmasq-dns-744ffd65bc-vcrbq\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.847197 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xhqjz"] Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.857538 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.878002 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnwl6"] Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.879542 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.899924 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnwl6"] Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.933103 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-config\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.933517 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t852\" (UniqueName: \"kubernetes.io/projected/f35324f3-25dd-4b25-8932-1d02eddcdd15-kube-api-access-5t852\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:24 crc kubenswrapper[4764]: I0127 07:32:24.933562 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-dns-svc\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.044944 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-config\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.044983 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t852\" (UniqueName: 
\"kubernetes.io/projected/f35324f3-25dd-4b25-8932-1d02eddcdd15-kube-api-access-5t852\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.045035 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-dns-svc\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.045911 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-dns-svc\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.048864 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-config\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.095239 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t852\" (UniqueName: \"kubernetes.io/projected/f35324f3-25dd-4b25-8932-1d02eddcdd15-kube-api-access-5t852\") pod \"dnsmasq-dns-95f5f6995-pnwl6\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.256989 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vcrbq"] Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.272218 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.679522 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.682779 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.690250 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.690289 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.690296 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.690667 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.690997 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7cpdn" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.691317 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.691530 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.698618 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.759934 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnwl6"] Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857325 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857375 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857419 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857454 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857480 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr8bn\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-kube-api-access-wr8bn\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857499 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b6de15-11fa-47bd-8648-53a8ad02deda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857513 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857530 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857753 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857769 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-config-data\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.857810 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/09b6de15-11fa-47bd-8648-53a8ad02deda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962304 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09b6de15-11fa-47bd-8648-53a8ad02deda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962352 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962377 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962415 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962430 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962468 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr8bn\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-kube-api-access-wr8bn\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962483 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b6de15-11fa-47bd-8648-53a8ad02deda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962497 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962514 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962550 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.962577 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-config-data\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.963532 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-config-data\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.964844 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.964874 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.964955 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.965235 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.965979 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.969485 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b6de15-11fa-47bd-8648-53a8ad02deda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.975811 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.979259 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09b6de15-11fa-47bd-8648-53a8ad02deda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:25 crc kubenswrapper[4764]: I0127 07:32:25.986932 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.000697 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wr8bn\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-kube-api-access-wr8bn\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.008032 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " pod="openstack/rabbitmq-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.026593 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.034872 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.049358 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.049466 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.049904 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.049980 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z484r" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.050127 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.050229 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 
07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.050398 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.064790 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.092465 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" event={"ID":"f35324f3-25dd-4b25-8932-1d02eddcdd15","Type":"ContainerStarted","Data":"32f397f354b20df456afa2c02b67492f85d27b2379bcc68191e586ca0ee1eeea"} Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.095491 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" event={"ID":"6ab25637-135f-4da6-8afb-6afa49e80ae9","Type":"ContainerStarted","Data":"41a5b68f0f6da5ce3dd1508c1d2ba71b2ab28e325ad08580474f083f347ebf64"} Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181726 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181783 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dff27bbf-49bf-4af7-aedb-e59e84269af3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181805 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181852 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181897 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dff27bbf-49bf-4af7-aedb-e59e84269af3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181935 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181963 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.181989 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.182026 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.182064 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5ht\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-kube-api-access-cb5ht\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.182109 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283284 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283340 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283371 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283396 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5ht\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-kube-api-access-cb5ht\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283450 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283520 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283551 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dff27bbf-49bf-4af7-aedb-e59e84269af3-pod-info\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283572 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283604 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283659 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dff27bbf-49bf-4af7-aedb-e59e84269af3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283685 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283957 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.283999 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.284903 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.286510 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.286665 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.287083 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.289189 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dff27bbf-49bf-4af7-aedb-e59e84269af3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.297702 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dff27bbf-49bf-4af7-aedb-e59e84269af3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.298567 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.298688 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.304851 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5ht\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-kube-api-access-cb5ht\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.306924 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.326156 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:26 crc kubenswrapper[4764]: I0127 07:32:26.377106 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.143203 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.146600 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.148821 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.148844 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-d4tj4" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.149879 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.152960 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.153345 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.167493 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.305970 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-kolla-config\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.306032 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5gm\" (UniqueName: \"kubernetes.io/projected/767ab4c4-b54f-448f-af5a-b4d07b433023-kube-api-access-fw5gm\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.306056 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-config-data-default\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.306100 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.306121 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767ab4c4-b54f-448f-af5a-b4d07b433023-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.306155 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-operator-scripts\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.306174 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/767ab4c4-b54f-448f-af5a-b4d07b433023-config-data-generated\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.306189 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/767ab4c4-b54f-448f-af5a-b4d07b433023-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.407950 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408328 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767ab4c4-b54f-448f-af5a-b4d07b433023-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408279 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408491 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-operator-scripts\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408519 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/767ab4c4-b54f-448f-af5a-b4d07b433023-config-data-generated\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408538 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/767ab4c4-b54f-448f-af5a-b4d07b433023-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408564 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-kolla-config\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408612 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw5gm\" (UniqueName: \"kubernetes.io/projected/767ab4c4-b54f-448f-af5a-b4d07b433023-kube-api-access-fw5gm\") pod \"openstack-galera-0\" (UID: 
\"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.408635 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-config-data-default\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.409675 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-config-data-default\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.409806 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/767ab4c4-b54f-448f-af5a-b4d07b433023-config-data-generated\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.409942 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-kolla-config\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.411647 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767ab4c4-b54f-448f-af5a-b4d07b433023-operator-scripts\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.412972 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/767ab4c4-b54f-448f-af5a-b4d07b433023-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.417231 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767ab4c4-b54f-448f-af5a-b4d07b433023-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.435713 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.437398 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw5gm\" (UniqueName: \"kubernetes.io/projected/767ab4c4-b54f-448f-af5a-b4d07b433023-kube-api-access-fw5gm\") pod \"openstack-galera-0\" (UID: \"767ab4c4-b54f-448f-af5a-b4d07b433023\") " pod="openstack/openstack-galera-0" Jan 27 07:32:27 crc kubenswrapper[4764]: I0127 07:32:27.479573 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.501766 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.504024 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.508492 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.508598 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.508492 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qm6gl" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.508787 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.520585 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634358 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcf25f7-570e-4d97-9109-9331ba1286a0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634427 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634483 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634508 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcf25f7-570e-4d97-9109-9331ba1286a0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634565 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ddcf25f7-570e-4d97-9109-9331ba1286a0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634740 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634760 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4df\" (UniqueName: \"kubernetes.io/projected/ddcf25f7-570e-4d97-9109-9331ba1286a0-kube-api-access-9w4df\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.634777 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736044 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ddcf25f7-570e-4d97-9109-9331ba1286a0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736180 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736212 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4df\" (UniqueName: \"kubernetes.io/projected/ddcf25f7-570e-4d97-9109-9331ba1286a0-kube-api-access-9w4df\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736237 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736287 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736312 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcf25f7-570e-4d97-9109-9331ba1286a0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736342 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.736362 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcf25f7-570e-4d97-9109-9331ba1286a0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.737566 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ddcf25f7-570e-4d97-9109-9331ba1286a0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.737775 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.738956 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.739304 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.739332 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ddcf25f7-570e-4d97-9109-9331ba1286a0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.742391 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcf25f7-570e-4d97-9109-9331ba1286a0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.743523 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcf25f7-570e-4d97-9109-9331ba1286a0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") 
" pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.764244 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4df\" (UniqueName: \"kubernetes.io/projected/ddcf25f7-570e-4d97-9109-9331ba1286a0-kube-api-access-9w4df\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.773073 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ddcf25f7-570e-4d97-9109-9331ba1286a0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.808179 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.809742 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.814127 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.814153 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.816053 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vnfmj" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.836075 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.849455 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.939626 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.939677 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.939724 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwt9\" (UniqueName: \"kubernetes.io/projected/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-kube-api-access-7lwt9\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.939743 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-kolla-config\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:28 crc kubenswrapper[4764]: I0127 07:32:28.939782 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-config-data\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.044257 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.044319 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.044376 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lwt9\" (UniqueName: \"kubernetes.io/projected/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-kube-api-access-7lwt9\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.044398 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-kolla-config\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.044471 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-config-data\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc 
kubenswrapper[4764]: I0127 07:32:29.045316 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-config-data\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.045334 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-kolla-config\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.050805 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.068448 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwt9\" (UniqueName: \"kubernetes.io/projected/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-kube-api-access-7lwt9\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.068653 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9689ecb1-cfaf-4f78-aa32-ca09875bfe4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f\") " pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.140585 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.626517 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6f4qf"] Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.628410 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.635295 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6f4qf"] Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.755839 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zt8\" (UniqueName: \"kubernetes.io/projected/7957b4f6-f6e6-4655-8001-6f2c05c995bd-kube-api-access-52zt8\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.755910 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-utilities\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.756127 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-catalog-content\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.858131 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-52zt8\" (UniqueName: \"kubernetes.io/projected/7957b4f6-f6e6-4655-8001-6f2c05c995bd-kube-api-access-52zt8\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.858187 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-utilities\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.858244 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-catalog-content\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.858834 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-catalog-content\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.859149 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-utilities\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.878164 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zt8\" (UniqueName: 
\"kubernetes.io/projected/7957b4f6-f6e6-4655-8001-6f2c05c995bd-kube-api-access-52zt8\") pod \"certified-operators-6f4qf\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:29 crc kubenswrapper[4764]: I0127 07:32:29.955122 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.117907 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.119415 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.121392 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7j5v9" Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.134665 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.178005 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw7lm\" (UniqueName: \"kubernetes.io/projected/76322b36-0480-4d14-8148-67e63de915fe-kube-api-access-zw7lm\") pod \"kube-state-metrics-0\" (UID: \"76322b36-0480-4d14-8148-67e63de915fe\") " pod="openstack/kube-state-metrics-0" Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.280076 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw7lm\" (UniqueName: \"kubernetes.io/projected/76322b36-0480-4d14-8148-67e63de915fe-kube-api-access-zw7lm\") pod \"kube-state-metrics-0\" (UID: \"76322b36-0480-4d14-8148-67e63de915fe\") " pod="openstack/kube-state-metrics-0" Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.312354 4764 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zw7lm\" (UniqueName: \"kubernetes.io/projected/76322b36-0480-4d14-8148-67e63de915fe-kube-api-access-zw7lm\") pod \"kube-state-metrics-0\" (UID: \"76322b36-0480-4d14-8148-67e63de915fe\") " pod="openstack/kube-state-metrics-0" Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.445734 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 07:32:31 crc kubenswrapper[4764]: I0127 07:32:31.946118 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.002082 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-grw8j"] Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.028434 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grw8j"] Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.028596 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.106359 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13cf6436-1dfd-49fa-b548-5c2e5d746e81-utilities\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.106489 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4d27\" (UniqueName: \"kubernetes.io/projected/13cf6436-1dfd-49fa-b548-5c2e5d746e81-kube-api-access-x4d27\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.106725 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cf6436-1dfd-49fa-b548-5c2e5d746e81-catalog-content\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.208570 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4d27\" (UniqueName: \"kubernetes.io/projected/13cf6436-1dfd-49fa-b548-5c2e5d746e81-kube-api-access-x4d27\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.208672 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cf6436-1dfd-49fa-b548-5c2e5d746e81-catalog-content\") pod 
\"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.208698 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13cf6436-1dfd-49fa-b548-5c2e5d746e81-utilities\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.209202 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13cf6436-1dfd-49fa-b548-5c2e5d746e81-utilities\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.209686 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cf6436-1dfd-49fa-b548-5c2e5d746e81-catalog-content\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.229349 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4d27\" (UniqueName: \"kubernetes.io/projected/13cf6436-1dfd-49fa-b548-5c2e5d746e81-kube-api-access-x4d27\") pod \"community-operators-grw8j\" (UID: \"13cf6436-1dfd-49fa-b548-5c2e5d746e81\") " pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:32 crc kubenswrapper[4764]: I0127 07:32:32.358138 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.114697 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rw2w4"] Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.116355 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.122706 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.122757 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-75gxq"] Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.123060 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.123548 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jnc99" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.126375 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.130324 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rw2w4"] Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.144360 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-75gxq"] Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244342 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-run\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244557 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad0e5a9-459c-4f9b-865b-ddc533316170-ovn-controller-tls-certs\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244584 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-lib\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244709 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-run\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 
07:32:34.244742 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2d1dfce-f31e-412a-af93-ad96fa2f3650-scripts\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244780 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lf9c\" (UniqueName: \"kubernetes.io/projected/d2d1dfce-f31e-412a-af93-ad96fa2f3650-kube-api-access-8lf9c\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244803 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthlq\" (UniqueName: \"kubernetes.io/projected/cad0e5a9-459c-4f9b-865b-ddc533316170-kube-api-access-tthlq\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244865 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-log\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244889 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-etc-ovs\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244937 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad0e5a9-459c-4f9b-865b-ddc533316170-combined-ca-bundle\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.244973 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-log-ovn\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.245040 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-run-ovn\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.245055 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cad0e5a9-459c-4f9b-865b-ddc533316170-scripts\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.346865 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-run\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347379 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8lf9c\" (UniqueName: \"kubernetes.io/projected/d2d1dfce-f31e-412a-af93-ad96fa2f3650-kube-api-access-8lf9c\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347407 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2d1dfce-f31e-412a-af93-ad96fa2f3650-scripts\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347449 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthlq\" (UniqueName: \"kubernetes.io/projected/cad0e5a9-459c-4f9b-865b-ddc533316170-kube-api-access-tthlq\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347472 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-log\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347494 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-etc-ovs\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347512 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cad0e5a9-459c-4f9b-865b-ddc533316170-combined-ca-bundle\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347541 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-log-ovn\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347572 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-run-ovn\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347635 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cad0e5a9-459c-4f9b-865b-ddc533316170-scripts\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347686 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-run\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347711 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad0e5a9-459c-4f9b-865b-ddc533316170-ovn-controller-tls-certs\") pod \"ovn-controller-rw2w4\" (UID: 
\"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347728 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-lib\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347880 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-lib\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.347339 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-run\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.348667 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-run-ovn\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.348973 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-var-log\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.349027 4764 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-run\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.349358 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d2d1dfce-f31e-412a-af93-ad96fa2f3650-etc-ovs\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.350474 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cad0e5a9-459c-4f9b-865b-ddc533316170-var-log-ovn\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.351129 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cad0e5a9-459c-4f9b-865b-ddc533316170-scripts\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.353662 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2d1dfce-f31e-412a-af93-ad96fa2f3650-scripts\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.359454 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad0e5a9-459c-4f9b-865b-ddc533316170-combined-ca-bundle\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " 
pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.360152 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad0e5a9-459c-4f9b-865b-ddc533316170-ovn-controller-tls-certs\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.364420 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthlq\" (UniqueName: \"kubernetes.io/projected/cad0e5a9-459c-4f9b-865b-ddc533316170-kube-api-access-tthlq\") pod \"ovn-controller-rw2w4\" (UID: \"cad0e5a9-459c-4f9b-865b-ddc533316170\") " pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.366663 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lf9c\" (UniqueName: \"kubernetes.io/projected/d2d1dfce-f31e-412a-af93-ad96fa2f3650-kube-api-access-8lf9c\") pod \"ovn-controller-ovs-75gxq\" (UID: \"d2d1dfce-f31e-412a-af93-ad96fa2f3650\") " pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.439028 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:34 crc kubenswrapper[4764]: I0127 07:32:34.468557 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:35 crc kubenswrapper[4764]: I0127 07:32:35.815141 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:32:36 crc kubenswrapper[4764]: I0127 07:32:36.213318 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f","Type":"ContainerStarted","Data":"a6163b4aaf0f92f25a025ebcd10ed7772ad6cbccdb443ea9fc374a081b41c646"} Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.635004 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.636599 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.638358 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.639773 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.639948 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.640135 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.640301 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-m2zjm" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.643092 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.702581 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.702635 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-config\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.702653 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.702669 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.702834 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72plm\" (UniqueName: \"kubernetes.io/projected/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-kube-api-access-72plm\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.702906 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.702944 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.703049 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.806223 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.806826 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-config\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.806854 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.806882 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.806937 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72plm\" (UniqueName: \"kubernetes.io/projected/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-kube-api-access-72plm\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.806979 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.807015 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.807082 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.807801 4764 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.808137 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-config\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.808489 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.808545 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.826199 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.828334 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72plm\" (UniqueName: \"kubernetes.io/projected/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-kube-api-access-72plm\") pod 
\"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.829212 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.830822 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.832571 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.844566 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zptb2" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.844755 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.845560 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.845749 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.846779 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.847876 4764 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"ovndbcluster-sb-config" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.848850 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a\") " pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.908331 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.908376 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1aef55ab-9f36-4eb1-8556-27e2136d1725-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.908407 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.908432 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 
07:32:37.908483 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.908509 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aef55ab-9f36-4eb1-8556-27e2136d1725-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.908547 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5g6l\" (UniqueName: \"kubernetes.io/projected/1aef55ab-9f36-4eb1-8556-27e2136d1725-kube-api-access-d5g6l\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.908580 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aef55ab-9f36-4eb1-8556-27e2136d1725-config\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:37 crc kubenswrapper[4764]: I0127 07:32:37.994410 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009419 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009494 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1aef55ab-9f36-4eb1-8556-27e2136d1725-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009524 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009587 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009621 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009649 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aef55ab-9f36-4eb1-8556-27e2136d1725-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009688 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5g6l\" (UniqueName: \"kubernetes.io/projected/1aef55ab-9f36-4eb1-8556-27e2136d1725-kube-api-access-d5g6l\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.009807 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aef55ab-9f36-4eb1-8556-27e2136d1725-config\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.010250 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.010884 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1aef55ab-9f36-4eb1-8556-27e2136d1725-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.012018 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aef55ab-9f36-4eb1-8556-27e2136d1725-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.012390 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aef55ab-9f36-4eb1-8556-27e2136d1725-config\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.018836 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.024133 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.024788 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef55ab-9f36-4eb1-8556-27e2136d1725-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.027192 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5g6l\" (UniqueName: \"kubernetes.io/projected/1aef55ab-9f36-4eb1-8556-27e2136d1725-kube-api-access-d5g6l\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.033930 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1aef55ab-9f36-4eb1-8556-27e2136d1725\") " pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:38 crc kubenswrapper[4764]: I0127 07:32:38.205894 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 07:32:41 crc kubenswrapper[4764]: I0127 07:32:41.199648 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 07:32:41 crc kubenswrapper[4764]: I0127 07:32:41.247830 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"09b6de15-11fa-47bd-8648-53a8ad02deda","Type":"ContainerStarted","Data":"28fa5cd75a0c0b748101f8477d376cbd49b6a208dc574768967f3ced726be3f8"} Jan 27 07:32:41 crc kubenswrapper[4764]: W0127 07:32:41.698111 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod767ab4c4_b54f_448f_af5a_b4d07b433023.slice/crio-f2878d0fdd133aa0e752b01f9225ec6153fce8de0b67f36d0cf407b1a5399b65 WatchSource:0}: Error finding container f2878d0fdd133aa0e752b01f9225ec6153fce8de0b67f36d0cf407b1a5399b65: Status 404 returned error can't find the container with id f2878d0fdd133aa0e752b01f9225ec6153fce8de0b67f36d0cf407b1a5399b65 Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.756679 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.756980 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lllt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-84bb9d8bd9-xhqjz_openstack(fec45bf1-8fae-4d97-85ac-c01b5b709af9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.758485 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" podUID="fec45bf1-8fae-4d97-85ac-c01b5b709af9" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.758784 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.758914 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghrs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-vcrbq_openstack(6ab25637-135f-4da6-8afb-6afa49e80ae9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.760415 4764 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.766801 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.767004 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-smvk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-65kgg_openstack(e1399aa2-374e-4f4c-bd18-43358d7283b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:32:41 crc kubenswrapper[4764]: E0127 07:32:41.768168 4764 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-65kgg" podUID="e1399aa2-374e-4f4c-bd18-43358d7283b6" Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.276656 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.277771 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"767ab4c4-b54f-448f-af5a-b4d07b433023","Type":"ContainerStarted","Data":"f2878d0fdd133aa0e752b01f9225ec6153fce8de0b67f36d0cf407b1a5399b65"} Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.284005 4764 generic.go:334] "Generic (PLEG): container finished" podID="f35324f3-25dd-4b25-8932-1d02eddcdd15" containerID="4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f" exitCode=0 Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.285227 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" event={"ID":"f35324f3-25dd-4b25-8932-1d02eddcdd15","Type":"ContainerDied","Data":"4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f"} Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.286026 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.293510 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6f4qf"] Jan 27 07:32:42 crc kubenswrapper[4764]: W0127 07:32:42.298167 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76322b36_0480_4d14_8148_67e63de915fe.slice/crio-625d4ce34f009b633058ab873d7b5ab5ea269d71db0272049a0c717e2ba292cb WatchSource:0}: Error finding container 
625d4ce34f009b633058ab873d7b5ab5ea269d71db0272049a0c717e2ba292cb: Status 404 returned error can't find the container with id 625d4ce34f009b633058ab873d7b5ab5ea269d71db0272049a0c717e2ba292cb Jan 27 07:32:42 crc kubenswrapper[4764]: W0127 07:32:42.305089 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddff27bbf_49bf_4af7_aedb_e59e84269af3.slice/crio-b1b8210521394b647bf3d6f0b153a2392a902f22622166f7cdfd1da1c7e204a5 WatchSource:0}: Error finding container b1b8210521394b647bf3d6f0b153a2392a902f22622166f7cdfd1da1c7e204a5: Status 404 returned error can't find the container with id b1b8210521394b647bf3d6f0b153a2392a902f22622166f7cdfd1da1c7e204a5 Jan 27 07:32:42 crc kubenswrapper[4764]: W0127 07:32:42.321461 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7957b4f6_f6e6_4655_8001_6f2c05c995bd.slice/crio-6ab31300522b8510f3d8b756b38c9e40d768b6ff7de95811a61c2bcabeb39bf6 WatchSource:0}: Error finding container 6ab31300522b8510f3d8b756b38c9e40d768b6ff7de95811a61c2bcabeb39bf6: Status 404 returned error can't find the container with id 6ab31300522b8510f3d8b756b38c9e40d768b6ff7de95811a61c2bcabeb39bf6 Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.474321 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.516037 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grw8j"] Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.657920 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.688777 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rw2w4"] Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.823924 4764 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.945898 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:42 crc kubenswrapper[4764]: I0127 07:32:42.957998 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:42 crc kubenswrapper[4764]: E0127 07:32:42.966006 4764 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cf6436_1dfd_49fa_b548_5c2e5d746e81.slice/crio-conmon-253722be180a200f50c81e8ec7e5ec610382863754f12a273e40810ab92908a1.scope\": RecentStats: unable to find data in memory cache]" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.128326 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smvk9\" (UniqueName: \"kubernetes.io/projected/e1399aa2-374e-4f4c-bd18-43358d7283b6-kube-api-access-smvk9\") pod \"e1399aa2-374e-4f4c-bd18-43358d7283b6\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.128429 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec45bf1-8fae-4d97-85ac-c01b5b709af9-config\") pod \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.128645 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lllt6\" (UniqueName: \"kubernetes.io/projected/fec45bf1-8fae-4d97-85ac-c01b5b709af9-kube-api-access-lllt6\") pod \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\" (UID: \"fec45bf1-8fae-4d97-85ac-c01b5b709af9\") " Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 
07:32:43.128838 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-dns-svc\") pod \"e1399aa2-374e-4f4c-bd18-43358d7283b6\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.128871 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-config\") pod \"e1399aa2-374e-4f4c-bd18-43358d7283b6\" (UID: \"e1399aa2-374e-4f4c-bd18-43358d7283b6\") " Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.129037 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec45bf1-8fae-4d97-85ac-c01b5b709af9-config" (OuterVolumeSpecName: "config") pod "fec45bf1-8fae-4d97-85ac-c01b5b709af9" (UID: "fec45bf1-8fae-4d97-85ac-c01b5b709af9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.129357 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-config" (OuterVolumeSpecName: "config") pod "e1399aa2-374e-4f4c-bd18-43358d7283b6" (UID: "e1399aa2-374e-4f4c-bd18-43358d7283b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.129386 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1399aa2-374e-4f4c-bd18-43358d7283b6" (UID: "e1399aa2-374e-4f4c-bd18-43358d7283b6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.129478 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec45bf1-8fae-4d97-85ac-c01b5b709af9-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.135348 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec45bf1-8fae-4d97-85ac-c01b5b709af9-kube-api-access-lllt6" (OuterVolumeSpecName: "kube-api-access-lllt6") pod "fec45bf1-8fae-4d97-85ac-c01b5b709af9" (UID: "fec45bf1-8fae-4d97-85ac-c01b5b709af9"). InnerVolumeSpecName "kube-api-access-lllt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.137195 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1399aa2-374e-4f4c-bd18-43358d7283b6-kube-api-access-smvk9" (OuterVolumeSpecName: "kube-api-access-smvk9") pod "e1399aa2-374e-4f4c-bd18-43358d7283b6" (UID: "e1399aa2-374e-4f4c-bd18-43358d7283b6"). InnerVolumeSpecName "kube-api-access-smvk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.231297 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lllt6\" (UniqueName: \"kubernetes.io/projected/fec45bf1-8fae-4d97-85ac-c01b5b709af9-kube-api-access-lllt6\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.231340 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.231352 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1399aa2-374e-4f4c-bd18-43358d7283b6-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.231363 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smvk9\" (UniqueName: \"kubernetes.io/projected/e1399aa2-374e-4f4c-bd18-43358d7283b6-kube-api-access-smvk9\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.295569 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4" event={"ID":"cad0e5a9-459c-4f9b-865b-ddc533316170","Type":"ContainerStarted","Data":"ef5a6726abab767451789cef775bf51fd74449297b1802298e2a4c939c34c3b5"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.297703 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"76322b36-0480-4d14-8148-67e63de915fe","Type":"ContainerStarted","Data":"625d4ce34f009b633058ab873d7b5ab5ea269d71db0272049a0c717e2ba292cb"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.301548 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" 
event={"ID":"f35324f3-25dd-4b25-8932-1d02eddcdd15","Type":"ContainerStarted","Data":"95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.301624 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.309709 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a","Type":"ContainerStarted","Data":"684deff398ecf6ca653e565a5ddc4cb2dc226bb794df776c28ba9b45365be46f"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.311951 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-75gxq"] Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.324295 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" podStartSLOduration=3.2164350600000002 podStartE2EDuration="19.324274371s" podCreationTimestamp="2026-01-27 07:32:24 +0000 UTC" firstStartedPulling="2026-01-27 07:32:25.778280894 +0000 UTC m=+958.373903410" lastFinishedPulling="2026-01-27 07:32:41.886120195 +0000 UTC m=+974.481742721" observedRunningTime="2026-01-27 07:32:43.322892095 +0000 UTC m=+975.918514611" watchObservedRunningTime="2026-01-27 07:32:43.324274371 +0000 UTC m=+975.919896897" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.325724 4764 generic.go:334] "Generic (PLEG): container finished" podID="13cf6436-1dfd-49fa-b548-5c2e5d746e81" containerID="253722be180a200f50c81e8ec7e5ec610382863754f12a273e40810ab92908a1" exitCode=0 Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.325848 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grw8j" event={"ID":"13cf6436-1dfd-49fa-b548-5c2e5d746e81","Type":"ContainerDied","Data":"253722be180a200f50c81e8ec7e5ec610382863754f12a273e40810ab92908a1"} 
Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.325903 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grw8j" event={"ID":"13cf6436-1dfd-49fa-b548-5c2e5d746e81","Type":"ContainerStarted","Data":"9ce6c7aaf51fedeb8f839dcb9984078a8de5099802bab00f960303203481c637"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.328143 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ddcf25f7-570e-4d97-9109-9331ba1286a0","Type":"ContainerStarted","Data":"f312c30f5ad3af6a9ac5c651b4359e0ac2edc3508cf212a73076c592b1fe121c"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.330862 4764 generic.go:334] "Generic (PLEG): container finished" podID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerID="16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0" exitCode=0 Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.330918 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" event={"ID":"6ab25637-135f-4da6-8afb-6afa49e80ae9","Type":"ContainerDied","Data":"16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.333852 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-65kgg" event={"ID":"e1399aa2-374e-4f4c-bd18-43358d7283b6","Type":"ContainerDied","Data":"2827745a784d1f5043a974e95153e2ccc95f3ea874cd45c2baba54269b0b66f1"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.333942 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-65kgg" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.341126 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dff27bbf-49bf-4af7-aedb-e59e84269af3","Type":"ContainerStarted","Data":"b1b8210521394b647bf3d6f0b153a2392a902f22622166f7cdfd1da1c7e204a5"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.343365 4764 generic.go:334] "Generic (PLEG): container finished" podID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerID="de8c988b30ed60aa6ae07e7ab6662bcfe7f0b60983f25f289d4eae3d2614f3ac" exitCode=0 Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.343419 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f4qf" event={"ID":"7957b4f6-f6e6-4655-8001-6f2c05c995bd","Type":"ContainerDied","Data":"de8c988b30ed60aa6ae07e7ab6662bcfe7f0b60983f25f289d4eae3d2614f3ac"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.343491 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f4qf" event={"ID":"7957b4f6-f6e6-4655-8001-6f2c05c995bd","Type":"ContainerStarted","Data":"6ab31300522b8510f3d8b756b38c9e40d768b6ff7de95811a61c2bcabeb39bf6"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.350216 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" event={"ID":"fec45bf1-8fae-4d97-85ac-c01b5b709af9","Type":"ContainerDied","Data":"bbe0c1dcdc52fe990f03953af898962410f1662f1748b574116a7ed9d25eb2c1"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.350307 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xhqjz" Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.354253 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1aef55ab-9f36-4eb1-8556-27e2136d1725","Type":"ContainerStarted","Data":"4b746dd563133cd5aaa06aa88abdd07ea055b6ed5debead4ee737465a893a82e"} Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.436027 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xhqjz"] Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.455553 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xhqjz"] Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.470580 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-65kgg"] Jan 27 07:32:43 crc kubenswrapper[4764]: I0127 07:32:43.494797 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-65kgg"] Jan 27 07:32:43 crc kubenswrapper[4764]: W0127 07:32:43.644618 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2d1dfce_f31e_412a_af93_ad96fa2f3650.slice/crio-5d4fd8fc55bc7fdd33eb7f978d78ffaa7f7e6f6da6bd6f438acd1c9b3e1992ac WatchSource:0}: Error finding container 5d4fd8fc55bc7fdd33eb7f978d78ffaa7f7e6f6da6bd6f438acd1c9b3e1992ac: Status 404 returned error can't find the container with id 5d4fd8fc55bc7fdd33eb7f978d78ffaa7f7e6f6da6bd6f438acd1c9b3e1992ac Jan 27 07:32:44 crc kubenswrapper[4764]: I0127 07:32:44.371536 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-75gxq" event={"ID":"d2d1dfce-f31e-412a-af93-ad96fa2f3650","Type":"ContainerStarted","Data":"5d4fd8fc55bc7fdd33eb7f978d78ffaa7f7e6f6da6bd6f438acd1c9b3e1992ac"} Jan 27 07:32:44 crc kubenswrapper[4764]: I0127 07:32:44.449548 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e1399aa2-374e-4f4c-bd18-43358d7283b6" path="/var/lib/kubelet/pods/e1399aa2-374e-4f4c-bd18-43358d7283b6/volumes" Jan 27 07:32:44 crc kubenswrapper[4764]: I0127 07:32:44.450040 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec45bf1-8fae-4d97-85ac-c01b5b709af9" path="/var/lib/kubelet/pods/fec45bf1-8fae-4d97-85ac-c01b5b709af9/volumes" Jan 27 07:32:50 crc kubenswrapper[4764]: I0127 07:32:50.273716 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:32:50 crc kubenswrapper[4764]: I0127 07:32:50.340392 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vcrbq"] Jan 27 07:32:53 crc kubenswrapper[4764]: I0127 07:32:53.438752 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" event={"ID":"6ab25637-135f-4da6-8afb-6afa49e80ae9","Type":"ContainerStarted","Data":"40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f"} Jan 27 07:32:53 crc kubenswrapper[4764]: I0127 07:32:53.439253 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:53 crc kubenswrapper[4764]: I0127 07:32:53.438918 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerName="dnsmasq-dns" containerID="cri-o://40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f" gracePeriod=10 Jan 27 07:32:53 crc kubenswrapper[4764]: I0127 07:32:53.459851 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" podStartSLOduration=-9223372007.39495 podStartE2EDuration="29.459824759s" podCreationTimestamp="2026-01-27 07:32:24 +0000 UTC" firstStartedPulling="2026-01-27 07:32:25.274668718 +0000 UTC m=+957.870291244" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:32:53.457021295 +0000 UTC m=+986.052643841" watchObservedRunningTime="2026-01-27 07:32:53.459824759 +0000 UTC m=+986.055447285" Jan 27 07:32:53 crc kubenswrapper[4764]: I0127 07:32:53.763040 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:32:53 crc kubenswrapper[4764]: I0127 07:32:53.763515 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.039930 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.049717 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-dns-svc\") pod \"6ab25637-135f-4da6-8afb-6afa49e80ae9\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.049777 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghrs5\" (UniqueName: \"kubernetes.io/projected/6ab25637-135f-4da6-8afb-6afa49e80ae9-kube-api-access-ghrs5\") pod \"6ab25637-135f-4da6-8afb-6afa49e80ae9\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.049807 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-config\") pod \"6ab25637-135f-4da6-8afb-6afa49e80ae9\" (UID: \"6ab25637-135f-4da6-8afb-6afa49e80ae9\") " Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.062371 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab25637-135f-4da6-8afb-6afa49e80ae9-kube-api-access-ghrs5" (OuterVolumeSpecName: "kube-api-access-ghrs5") pod "6ab25637-135f-4da6-8afb-6afa49e80ae9" (UID: "6ab25637-135f-4da6-8afb-6afa49e80ae9"). InnerVolumeSpecName "kube-api-access-ghrs5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.169209 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghrs5\" (UniqueName: \"kubernetes.io/projected/6ab25637-135f-4da6-8afb-6afa49e80ae9-kube-api-access-ghrs5\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.273895 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ab25637-135f-4da6-8afb-6afa49e80ae9" (UID: "6ab25637-135f-4da6-8afb-6afa49e80ae9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.301976 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-config" (OuterVolumeSpecName: "config") pod "6ab25637-135f-4da6-8afb-6afa49e80ae9" (UID: "6ab25637-135f-4da6-8afb-6afa49e80ae9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.371524 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.371943 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab25637-135f-4da6-8afb-6afa49e80ae9-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.452951 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1aef55ab-9f36-4eb1-8556-27e2136d1725","Type":"ContainerStarted","Data":"f8b287c9a0fca241956c99cfa056c530f9c88366f1e3f3a73aa4374cd8305fde"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.454794 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ddcf25f7-570e-4d97-9109-9331ba1286a0","Type":"ContainerStarted","Data":"8e7acde8ce0f73caa4627a75ce73a89625cb7be6c7cb7285259623caea757434"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.456347 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dff27bbf-49bf-4af7-aedb-e59e84269af3","Type":"ContainerStarted","Data":"0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.457533 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"09b6de15-11fa-47bd-8648-53a8ad02deda","Type":"ContainerStarted","Data":"7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.459121 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"767ab4c4-b54f-448f-af5a-b4d07b433023","Type":"ContainerStarted","Data":"8d4f57365fae9ca645e5ded78cc6153daee2d9525e6f65c05aa2390be35e9626"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.461134 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"76322b36-0480-4d14-8148-67e63de915fe","Type":"ContainerStarted","Data":"afff865eab5405c7296c151528437da377d38b37d49bb95cdacf969830d3883c"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.461322 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.462573 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9689ecb1-cfaf-4f78-aa32-ca09875bfe4f","Type":"ContainerStarted","Data":"ef6745e91f4886addfd402fbaee08187390defefba8f48a402e05e6094db015c"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.462670 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.464791 4764 generic.go:334] "Generic (PLEG): container finished" podID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerID="40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f" exitCode=0 Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.464817 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" event={"ID":"6ab25637-135f-4da6-8afb-6afa49e80ae9","Type":"ContainerDied","Data":"40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.464925 4764 scope.go:117] "RemoveContainer" containerID="40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.465078 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" 
event={"ID":"6ab25637-135f-4da6-8afb-6afa49e80ae9","Type":"ContainerDied","Data":"41a5b68f0f6da5ce3dd1508c1d2ba71b2ab28e325ad08580474f083f347ebf64"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.465355 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vcrbq" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.466714 4764 generic.go:334] "Generic (PLEG): container finished" podID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerID="f10c24890d8f204a0e9a7953d6feeb32088bc02445b51083f5f049da60071d8f" exitCode=0 Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.466769 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f4qf" event={"ID":"7957b4f6-f6e6-4655-8001-6f2c05c995bd","Type":"ContainerDied","Data":"f10c24890d8f204a0e9a7953d6feeb32088bc02445b51083f5f049da60071d8f"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.472758 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4" event={"ID":"cad0e5a9-459c-4f9b-865b-ddc533316170","Type":"ContainerStarted","Data":"b87c7cab81fcfa8ca93b9357dbafef2688539a4167f6bbdd358e43bc9e924fe1"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.472918 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rw2w4" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.480549 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a","Type":"ContainerStarted","Data":"ce1c5e3884e55a766bb3c0c4adb1863fb0bcfbb2a894f6fe28ad8f02aa578e29"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.491719 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-75gxq" 
event={"ID":"d2d1dfce-f31e-412a-af93-ad96fa2f3650","Type":"ContainerStarted","Data":"86f6c2ba1010442bc6a183cc570ad4ebd2ff40d49c0b40f60b61a7128520c6d4"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.495248 4764 generic.go:334] "Generic (PLEG): container finished" podID="13cf6436-1dfd-49fa-b548-5c2e5d746e81" containerID="19166862233bc2059afe49bdf0bc38d0abe8d8cc7bd439bd7c96732eb48244f5" exitCode=0 Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.495285 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grw8j" event={"ID":"13cf6436-1dfd-49fa-b548-5c2e5d746e81","Type":"ContainerDied","Data":"19166862233bc2059afe49bdf0bc38d0abe8d8cc7bd439bd7c96732eb48244f5"} Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.507861 4764 scope.go:117] "RemoveContainer" containerID="16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.512913 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.731835094000001 podStartE2EDuration="23.512891107s" podCreationTimestamp="2026-01-27 07:32:31 +0000 UTC" firstStartedPulling="2026-01-27 07:32:42.306266111 +0000 UTC m=+974.901888637" lastFinishedPulling="2026-01-27 07:32:53.087322114 +0000 UTC m=+985.682944650" observedRunningTime="2026-01-27 07:32:54.505769148 +0000 UTC m=+987.101391674" watchObservedRunningTime="2026-01-27 07:32:54.512891107 +0000 UTC m=+987.108513633" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.562304 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.752350333 podStartE2EDuration="26.562261804s" podCreationTimestamp="2026-01-27 07:32:28 +0000 UTC" firstStartedPulling="2026-01-27 07:32:35.361465794 +0000 UTC m=+967.957088320" lastFinishedPulling="2026-01-27 07:32:48.171377265 +0000 UTC m=+980.766999791" 
observedRunningTime="2026-01-27 07:32:54.549077695 +0000 UTC m=+987.144700241" watchObservedRunningTime="2026-01-27 07:32:54.562261804 +0000 UTC m=+987.157884330" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.586879 4764 scope.go:117] "RemoveContainer" containerID="40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f" Jan 27 07:32:54 crc kubenswrapper[4764]: E0127 07:32:54.587567 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f\": container with ID starting with 40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f not found: ID does not exist" containerID="40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.587618 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f"} err="failed to get container status \"40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f\": rpc error: code = NotFound desc = could not find container \"40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f\": container with ID starting with 40c94500df0d2f2eaec65827a7050388f5afbf167488947a175681620df20f6f not found: ID does not exist" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.587648 4764 scope.go:117] "RemoveContainer" containerID="16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0" Jan 27 07:32:54 crc kubenswrapper[4764]: E0127 07:32:54.588235 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0\": container with ID starting with 16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0 not found: ID does not exist" 
containerID="16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.588261 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0"} err="failed to get container status \"16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0\": rpc error: code = NotFound desc = could not find container \"16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0\": container with ID starting with 16018b0ab122e17d6b16c894066df1974cb58a013eabc2e452e3fe0ecba62df0 not found: ID does not exist" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.648720 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rw2w4" podStartSLOduration=10.471790841 podStartE2EDuration="20.648696283s" podCreationTimestamp="2026-01-27 07:32:34 +0000 UTC" firstStartedPulling="2026-01-27 07:32:42.752103769 +0000 UTC m=+975.347726295" lastFinishedPulling="2026-01-27 07:32:52.929009211 +0000 UTC m=+985.524631737" observedRunningTime="2026-01-27 07:32:54.647532712 +0000 UTC m=+987.243155258" watchObservedRunningTime="2026-01-27 07:32:54.648696283 +0000 UTC m=+987.244318799" Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.703016 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vcrbq"] Jan 27 07:32:54 crc kubenswrapper[4764]: I0127 07:32:54.708066 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vcrbq"] Jan 27 07:32:55 crc kubenswrapper[4764]: I0127 07:32:55.505712 4764 generic.go:334] "Generic (PLEG): container finished" podID="d2d1dfce-f31e-412a-af93-ad96fa2f3650" containerID="86f6c2ba1010442bc6a183cc570ad4ebd2ff40d49c0b40f60b61a7128520c6d4" exitCode=0 Jan 27 07:32:55 crc kubenswrapper[4764]: I0127 07:32:55.506009 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-75gxq" event={"ID":"d2d1dfce-f31e-412a-af93-ad96fa2f3650","Type":"ContainerDied","Data":"86f6c2ba1010442bc6a183cc570ad4ebd2ff40d49c0b40f60b61a7128520c6d4"} Jan 27 07:32:55 crc kubenswrapper[4764]: I0127 07:32:55.509891 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grw8j" event={"ID":"13cf6436-1dfd-49fa-b548-5c2e5d746e81","Type":"ContainerStarted","Data":"8aa81d4e83eeddba128be84658a62ff55d1a52f17cce191710a8f3066419ff66"} Jan 27 07:32:55 crc kubenswrapper[4764]: I0127 07:32:55.514187 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f4qf" event={"ID":"7957b4f6-f6e6-4655-8001-6f2c05c995bd","Type":"ContainerStarted","Data":"a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf"} Jan 27 07:32:55 crc kubenswrapper[4764]: I0127 07:32:55.544353 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-grw8j" podStartSLOduration=13.194827226 podStartE2EDuration="24.544339423s" podCreationTimestamp="2026-01-27 07:32:31 +0000 UTC" firstStartedPulling="2026-01-27 07:32:43.637238079 +0000 UTC m=+976.232860605" lastFinishedPulling="2026-01-27 07:32:54.986750276 +0000 UTC m=+987.582372802" observedRunningTime="2026-01-27 07:32:55.542379651 +0000 UTC m=+988.138002177" watchObservedRunningTime="2026-01-27 07:32:55.544339423 +0000 UTC m=+988.139961949" Jan 27 07:32:55 crc kubenswrapper[4764]: I0127 07:32:55.565328 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6f4qf" podStartSLOduration=15.304396187 podStartE2EDuration="26.565307978s" podCreationTimestamp="2026-01-27 07:32:29 +0000 UTC" firstStartedPulling="2026-01-27 07:32:43.634935898 +0000 UTC m=+976.230558414" lastFinishedPulling="2026-01-27 07:32:54.895847679 +0000 UTC m=+987.491470205" observedRunningTime="2026-01-27 07:32:55.560037168 +0000 
UTC m=+988.155659684" watchObservedRunningTime="2026-01-27 07:32:55.565307978 +0000 UTC m=+988.160930504" Jan 27 07:32:56 crc kubenswrapper[4764]: I0127 07:32:56.447887 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" path="/var/lib/kubelet/pods/6ab25637-135f-4da6-8afb-6afa49e80ae9/volumes" Jan 27 07:32:56 crc kubenswrapper[4764]: I0127 07:32:56.532468 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-75gxq" event={"ID":"d2d1dfce-f31e-412a-af93-ad96fa2f3650","Type":"ContainerStarted","Data":"b2969fabba35212a1812af187cb1037003695f791e17215e2ad5f0fabe9ee96d"} Jan 27 07:32:56 crc kubenswrapper[4764]: I0127 07:32:56.532513 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-75gxq" event={"ID":"d2d1dfce-f31e-412a-af93-ad96fa2f3650","Type":"ContainerStarted","Data":"de66648e5cafbd60e8a08dbf991cd9d661ce915fdfdbd9a36c0d9ce5c6b9b67d"} Jan 27 07:32:57 crc kubenswrapper[4764]: I0127 07:32:57.541150 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:57 crc kubenswrapper[4764]: I0127 07:32:57.541207 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:32:58 crc kubenswrapper[4764]: I0127 07:32:58.548770 4764 generic.go:334] "Generic (PLEG): container finished" podID="ddcf25f7-570e-4d97-9109-9331ba1286a0" containerID="8e7acde8ce0f73caa4627a75ce73a89625cb7be6c7cb7285259623caea757434" exitCode=0 Jan 27 07:32:58 crc kubenswrapper[4764]: I0127 07:32:58.548872 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ddcf25f7-570e-4d97-9109-9331ba1286a0","Type":"ContainerDied","Data":"8e7acde8ce0f73caa4627a75ce73a89625cb7be6c7cb7285259623caea757434"} Jan 27 07:32:58 crc kubenswrapper[4764]: I0127 07:32:58.550583 4764 generic.go:334] "Generic (PLEG): container 
finished" podID="767ab4c4-b54f-448f-af5a-b4d07b433023" containerID="8d4f57365fae9ca645e5ded78cc6153daee2d9525e6f65c05aa2390be35e9626" exitCode=0 Jan 27 07:32:58 crc kubenswrapper[4764]: I0127 07:32:58.550683 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"767ab4c4-b54f-448f-af5a-b4d07b433023","Type":"ContainerDied","Data":"8d4f57365fae9ca645e5ded78cc6153daee2d9525e6f65c05aa2390be35e9626"} Jan 27 07:32:58 crc kubenswrapper[4764]: I0127 07:32:58.571853 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-75gxq" podStartSLOduration=15.298438673 podStartE2EDuration="24.571825429s" podCreationTimestamp="2026-01-27 07:32:34 +0000 UTC" firstStartedPulling="2026-01-27 07:32:43.64861981 +0000 UTC m=+976.244242336" lastFinishedPulling="2026-01-27 07:32:52.922006566 +0000 UTC m=+985.517629092" observedRunningTime="2026-01-27 07:32:56.554860404 +0000 UTC m=+989.150482930" watchObservedRunningTime="2026-01-27 07:32:58.571825429 +0000 UTC m=+991.167447965" Jan 27 07:32:59 crc kubenswrapper[4764]: I0127 07:32:59.143628 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 07:32:59 crc kubenswrapper[4764]: I0127 07:32:59.560485 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ddcf25f7-570e-4d97-9109-9331ba1286a0","Type":"ContainerStarted","Data":"a2b10fc66cc90fb0597080c81de079b98c0ce3e4caae5491ecc6df6c7b1227e1"} Jan 27 07:32:59 crc kubenswrapper[4764]: I0127 07:32:59.563350 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"767ab4c4-b54f-448f-af5a-b4d07b433023","Type":"ContainerStarted","Data":"b2e8db1982d7af664757b6227156fa526769a31a8f7beddcc2605604812f2be8"} Jan 27 07:32:59 crc kubenswrapper[4764]: I0127 07:32:59.608131 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.150783084 podStartE2EDuration="32.608105713s" podCreationTimestamp="2026-01-27 07:32:27 +0000 UTC" firstStartedPulling="2026-01-27 07:32:42.472471253 +0000 UTC m=+975.068093779" lastFinishedPulling="2026-01-27 07:32:52.929793882 +0000 UTC m=+985.525416408" observedRunningTime="2026-01-27 07:32:59.586407448 +0000 UTC m=+992.182029994" watchObservedRunningTime="2026-01-27 07:32:59.608105713 +0000 UTC m=+992.203728239" Jan 27 07:32:59 crc kubenswrapper[4764]: I0127 07:32:59.625100 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.631488349 podStartE2EDuration="33.625071722s" podCreationTimestamp="2026-01-27 07:32:26 +0000 UTC" firstStartedPulling="2026-01-27 07:32:41.706277362 +0000 UTC m=+974.301899898" lastFinishedPulling="2026-01-27 07:32:49.699860745 +0000 UTC m=+982.295483271" observedRunningTime="2026-01-27 07:32:59.611171994 +0000 UTC m=+992.206794520" watchObservedRunningTime="2026-01-27 07:32:59.625071722 +0000 UTC m=+992.220694258" Jan 27 07:32:59 crc kubenswrapper[4764]: I0127 07:32:59.956017 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:32:59 crc kubenswrapper[4764]: I0127 07:32:59.956060 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:33:00 crc kubenswrapper[4764]: I0127 07:33:00.001193 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:33:00 crc kubenswrapper[4764]: I0127 07:33:00.571656 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1aef55ab-9f36-4eb1-8556-27e2136d1725","Type":"ContainerStarted","Data":"4678b11eb3177f51aa95cc5f6ef8eb2366c8dd684a3c63af96f56ddda5f3721e"} Jan 27 07:33:00 crc 
kubenswrapper[4764]: I0127 07:33:00.593616 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.123083453 podStartE2EDuration="24.593598921s" podCreationTimestamp="2026-01-27 07:32:36 +0000 UTC" firstStartedPulling="2026-01-27 07:32:42.756998568 +0000 UTC m=+975.352621094" lastFinishedPulling="2026-01-27 07:33:00.227514036 +0000 UTC m=+992.823136562" observedRunningTime="2026-01-27 07:33:00.592555684 +0000 UTC m=+993.188178210" watchObservedRunningTime="2026-01-27 07:33:00.593598921 +0000 UTC m=+993.189221457" Jan 27 07:33:00 crc kubenswrapper[4764]: I0127 07:33:00.637933 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:33:00 crc kubenswrapper[4764]: I0127 07:33:00.812484 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f4qf"] Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.452883 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.516201 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-8nrbd"] Jan 27 07:33:01 crc kubenswrapper[4764]: E0127 07:33:01.516528 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerName="dnsmasq-dns" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.516547 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerName="dnsmasq-dns" Jan 27 07:33:01 crc kubenswrapper[4764]: E0127 07:33:01.516589 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerName="init" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.516597 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerName="init" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.516740 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab25637-135f-4da6-8afb-6afa49e80ae9" containerName="dnsmasq-dns" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.520524 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.552872 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-8nrbd"] Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.586097 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a","Type":"ContainerStarted","Data":"51037d34a6c0989665ebfa8c0bf29b56698fdbf2562de83fa3755b9d7e009b3c"} Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.602319 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.602790 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5c8\" (UniqueName: \"kubernetes.io/projected/0ad94135-c02a-47b0-a06f-26586de06990-kube-api-access-sq5c8\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.602877 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-config\") pod 
\"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.611027 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.933016581 podStartE2EDuration="25.611003004s" podCreationTimestamp="2026-01-27 07:32:36 +0000 UTC" firstStartedPulling="2026-01-27 07:32:42.845518292 +0000 UTC m=+975.441140818" lastFinishedPulling="2026-01-27 07:33:00.523504705 +0000 UTC m=+993.119127241" observedRunningTime="2026-01-27 07:33:01.607356978 +0000 UTC m=+994.202979504" watchObservedRunningTime="2026-01-27 07:33:01.611003004 +0000 UTC m=+994.206625530" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.704082 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.704227 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5c8\" (UniqueName: \"kubernetes.io/projected/0ad94135-c02a-47b0-a06f-26586de06990-kube-api-access-sq5c8\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.704333 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-config\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.706016 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-config\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.706033 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.728389 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5c8\" (UniqueName: \"kubernetes.io/projected/0ad94135-c02a-47b0-a06f-26586de06990-kube-api-access-sq5c8\") pod \"dnsmasq-dns-7f9f9f545f-8nrbd\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.842577 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:01 crc kubenswrapper[4764]: I0127 07:33:01.995611 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.085141 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.206657 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.268455 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.312606 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-8nrbd"] Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.362449 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.362837 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.429493 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.592630 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" event={"ID":"0ad94135-c02a-47b0-a06f-26586de06990","Type":"ContainerStarted","Data":"6dc1253942a53e9ea1090dd016883663e35d53b66b461701878adc6e515d0b14"} Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.592977 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 07:33:02 crc 
kubenswrapper[4764]: I0127 07:33:02.593261 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.593267 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6f4qf" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="registry-server" containerID="cri-o://a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf" gracePeriod=2 Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.637771 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.642852 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.643991 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.644252 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.653875 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.654401 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.655456 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.655460 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-65lz5" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.656421 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 
07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.660657 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-grw8j" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.736215 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3f481ed8-7f32-478b-88ce-6caaa3a42074-cache\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.736266 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mbx\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-kube-api-access-88mbx\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.736304 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3f481ed8-7f32-478b-88ce-6caaa3a42074-lock\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.736372 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.736386 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: 
\"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.736472 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f481ed8-7f32-478b-88ce-6caaa3a42074-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.838045 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f481ed8-7f32-478b-88ce-6caaa3a42074-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.838130 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3f481ed8-7f32-478b-88ce-6caaa3a42074-cache\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.838155 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mbx\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-kube-api-access-88mbx\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.838183 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3f481ed8-7f32-478b-88ce-6caaa3a42074-lock\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.838223 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.838239 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: E0127 07:33:02.838384 4764 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 07:33:02 crc kubenswrapper[4764]: E0127 07:33:02.838397 4764 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 07:33:02 crc kubenswrapper[4764]: E0127 07:33:02.838455 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift podName:3f481ed8-7f32-478b-88ce-6caaa3a42074 nodeName:}" failed. No retries permitted until 2026-01-27 07:33:03.3384248 +0000 UTC m=+995.934047316 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift") pod "swift-storage-0" (UID: "3f481ed8-7f32-478b-88ce-6caaa3a42074") : configmap "swift-ring-files" not found Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.838968 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3f481ed8-7f32-478b-88ce-6caaa3a42074-lock\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.839233 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.839745 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3f481ed8-7f32-478b-88ce-6caaa3a42074-cache\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.843924 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f481ed8-7f32-478b-88ce-6caaa3a42074-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.857465 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mbx\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-kube-api-access-88mbx\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " 
pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.866272 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.878844 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-8nrbd"] Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.898742 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8555945b55-z9rm5"] Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.900318 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.902887 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 07:33:02 crc kubenswrapper[4764]: I0127 07:33:02.906574 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-z9rm5"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.043302 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.043777 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsr9\" (UniqueName: \"kubernetes.io/projected/392ea21f-523b-49dc-928e-e471151d4b8d-kube-api-access-vqsr9\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " 
pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.043852 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-dns-svc\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.043886 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-config\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.119132 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hl87j"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.120149 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.123479 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.146380 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsr9\" (UniqueName: \"kubernetes.io/projected/392ea21f-523b-49dc-928e-e471151d4b8d-kube-api-access-vqsr9\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.146494 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-dns-svc\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.146521 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-config\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.146590 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.147503 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.147861 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-dns-svc\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.148048 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-config\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.148089 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hl87j"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.174340 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsr9\" (UniqueName: \"kubernetes.io/projected/392ea21f-523b-49dc-928e-e471151d4b8d-kube-api-access-vqsr9\") pod \"dnsmasq-dns-8555945b55-z9rm5\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.178919 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5fdxs"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.180265 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.189794 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.190017 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.190169 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.201808 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grw8j"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.206463 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5fdxs"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.220983 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248514 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-ovn-rundir\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248576 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-ring-data-devices\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248594 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-combined-ca-bundle\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248662 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-combined-ca-bundle\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248687 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gp4w\" (UniqueName: \"kubernetes.io/projected/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-kube-api-access-9gp4w\") pod 
\"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248708 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bc23877-3f5d-40bd-a1ee-4589e777beec-etc-swift\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248741 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248758 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-ovs-rundir\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248778 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-swiftconf\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248795 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-scripts\") 
pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248817 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-dispersionconf\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248833 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-config\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.248860 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj685\" (UniqueName: \"kubernetes.io/projected/5bc23877-3f5d-40bd-a1ee-4589e777beec-kube-api-access-kj685\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.318660 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nj2jh"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.319080 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nj2jh" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="registry-server" containerID="cri-o://3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb" gracePeriod=2 Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.339383 4764 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-northd-0"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.345664 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.350461 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-z9rm5"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.352752 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-combined-ca-bundle\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.352843 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gp4w\" (UniqueName: \"kubernetes.io/projected/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-kube-api-access-9gp4w\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.353057 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bc23877-3f5d-40bd-a1ee-4589e777beec-etc-swift\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.353176 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 
07:33:03.353320 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bc23877-3f5d-40bd-a1ee-4589e777beec-etc-swift\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.353426 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-ovs-rundir\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.354796 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-ovs-rundir\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.354876 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-swiftconf\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.354960 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-scripts\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.355030 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-dispersionconf\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.355068 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-config\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.355156 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj685\" (UniqueName: \"kubernetes.io/projected/5bc23877-3f5d-40bd-a1ee-4589e777beec-kube-api-access-kj685\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.355215 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-ovn-rundir\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.355342 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-ring-data-devices\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.355368 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-combined-ca-bundle\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.355491 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:03 crc kubenswrapper[4764]: E0127 07:33:03.355830 4764 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 07:33:03 crc kubenswrapper[4764]: E0127 07:33:03.355856 4764 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 07:33:03 crc kubenswrapper[4764]: E0127 07:33:03.355909 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift podName:3f481ed8-7f32-478b-88ce-6caaa3a42074 nodeName:}" failed. No retries permitted until 2026-01-27 07:33:04.355884594 +0000 UTC m=+996.951507120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift") pod "swift-storage-0" (UID: "3f481ed8-7f32-478b-88ce-6caaa3a42074") : configmap "swift-ring-files" not found Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.358251 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.358424 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.359725 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-szdmx" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.359803 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-combined-ca-bundle\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.359968 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.360207 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.360279 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-ovn-rundir\") pod \"ovn-controller-metrics-hl87j\" (UID: 
\"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.362839 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-swiftconf\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.364070 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-scripts\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.364920 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-ring-data-devices\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.366835 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-combined-ca-bundle\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.373498 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.374919 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-config\") pod \"ovn-controller-metrics-hl87j\" 
(UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.378545 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-gp6p9"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.381680 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.383012 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj685\" (UniqueName: \"kubernetes.io/projected/5bc23877-3f5d-40bd-a1ee-4589e777beec-kube-api-access-kj685\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.384073 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gp4w\" (UniqueName: \"kubernetes.io/projected/3fc8ca71-7042-4f29-8c78-d7f1974c55c2-kube-api-access-9gp4w\") pod \"ovn-controller-metrics-hl87j\" (UID: \"3fc8ca71-7042-4f29-8c78-d7f1974c55c2\") " pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.385737 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.389544 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-gp6p9"] Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.391988 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-dispersionconf\") pod \"swift-ring-rebalance-5fdxs\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.461284 4764 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hl87j" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462151 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/febffc0b-3eb0-4183-993e-e12bb3e11744-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462197 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febffc0b-3eb0-4183-993e-e12bb3e11744-config\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462225 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462247 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-config\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462281 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/febffc0b-3eb0-4183-993e-e12bb3e11744-scripts\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " 
pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462306 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvsc\" (UniqueName: \"kubernetes.io/projected/4799479c-9f58-454a-be5a-1c00be138aa4-kube-api-access-rbvsc\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462330 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462344 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462360 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8rl\" (UniqueName: \"kubernetes.io/projected/febffc0b-3eb0-4183-993e-e12bb3e11744-kube-api-access-vn8rl\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462383 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " 
pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462402 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.462417 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.544381 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564090 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/febffc0b-3eb0-4183-993e-e12bb3e11744-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564151 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febffc0b-3eb0-4183-993e-e12bb3e11744-config\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564228 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: 
\"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564250 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-config\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564297 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/febffc0b-3eb0-4183-993e-e12bb3e11744-scripts\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564348 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvsc\" (UniqueName: \"kubernetes.io/projected/4799479c-9f58-454a-be5a-1c00be138aa4-kube-api-access-rbvsc\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564374 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564389 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: 
I0127 07:33:03.564406 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8rl\" (UniqueName: \"kubernetes.io/projected/febffc0b-3eb0-4183-993e-e12bb3e11744-kube-api-access-vn8rl\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564427 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564464 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.564479 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.568691 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/febffc0b-3eb0-4183-993e-e12bb3e11744-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.569342 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/febffc0b-3eb0-4183-993e-e12bb3e11744-scripts\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.569397 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febffc0b-3eb0-4183-993e-e12bb3e11744-config\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.570639 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.572032 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.575065 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.576383 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc 
kubenswrapper[4764]: I0127 07:33:03.577638 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.580267 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-config\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.585565 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febffc0b-3eb0-4183-993e-e12bb3e11744-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.590231 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbvsc\" (UniqueName: \"kubernetes.io/projected/4799479c-9f58-454a-be5a-1c00be138aa4-kube-api-access-rbvsc\") pod \"dnsmasq-dns-6cb545bd4c-gp6p9\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.597062 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8rl\" (UniqueName: \"kubernetes.io/projected/febffc0b-3eb0-4183-993e-e12bb3e11744-kube-api-access-vn8rl\") pod \"ovn-northd-0\" (UID: \"febffc0b-3eb0-4183-993e-e12bb3e11744\") " pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.789898 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.812505 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hl87j"] Jan 27 07:33:03 crc kubenswrapper[4764]: W0127 07:33:03.841065 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc8ca71_7042_4f29_8c78_d7f1974c55c2.slice/crio-e7ce2d426e01fd9d0a804701c09f9c04b8511d4cb9f9d181de3f680179affe0d WatchSource:0}: Error finding container e7ce2d426e01fd9d0a804701c09f9c04b8511d4cb9f9d181de3f680179affe0d: Status 404 returned error can't find the container with id e7ce2d426e01fd9d0a804701c09f9c04b8511d4cb9f9d181de3f680179affe0d Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.859956 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:03 crc kubenswrapper[4764]: I0127 07:33:03.890751 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-z9rm5"] Jan 27 07:33:03 crc kubenswrapper[4764]: W0127 07:33:03.920757 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392ea21f_523b_49dc_928e_e471151d4b8d.slice/crio-5f535e24952671490a14e82a3003dee76561a7768fd9df98c248e4e863d10269 WatchSource:0}: Error finding container 5f535e24952671490a14e82a3003dee76561a7768fd9df98c248e4e863d10269: Status 404 returned error can't find the container with id 5f535e24952671490a14e82a3003dee76561a7768fd9df98c248e4e863d10269 Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.159023 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5fdxs"] Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.368642 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 07:33:04 crc kubenswrapper[4764]: 
I0127 07:33:04.401270 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:04 crc kubenswrapper[4764]: E0127 07:33:04.401550 4764 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 07:33:04 crc kubenswrapper[4764]: E0127 07:33:04.401565 4764 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 07:33:04 crc kubenswrapper[4764]: E0127 07:33:04.401618 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift podName:3f481ed8-7f32-478b-88ce-6caaa3a42074 nodeName:}" failed. No retries permitted until 2026-01-27 07:33:06.401605008 +0000 UTC m=+998.997227534 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift") pod "swift-storage-0" (UID: "3f481ed8-7f32-478b-88ce-6caaa3a42074") : configmap "swift-ring-files" not found Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.598802 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-gp6p9"] Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.621094 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-z9rm5" event={"ID":"392ea21f-523b-49dc-928e-e471151d4b8d","Type":"ContainerStarted","Data":"5f535e24952671490a14e82a3003dee76561a7768fd9df98c248e4e863d10269"} Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.628757 4764 generic.go:334] "Generic (PLEG): container finished" podID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerID="a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf" exitCode=0 Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.628810 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f4qf" event={"ID":"7957b4f6-f6e6-4655-8001-6f2c05c995bd","Type":"ContainerDied","Data":"a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf"} Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.630215 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hl87j" event={"ID":"3fc8ca71-7042-4f29-8c78-d7f1974c55c2","Type":"ContainerStarted","Data":"e7ce2d426e01fd9d0a804701c09f9c04b8511d4cb9f9d181de3f680179affe0d"} Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.635777 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fdxs" event={"ID":"5bc23877-3f5d-40bd-a1ee-4589e777beec","Type":"ContainerStarted","Data":"987ec83855c1f0074b4d5e8155b93321327f4f0e235e0871746e5102b1c66e6d"} Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.640013 
4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"febffc0b-3eb0-4183-993e-e12bb3e11744","Type":"ContainerStarted","Data":"9a03adcf37b6f3a8b20fc58f36036882f2a6c9647fb68479b650d993f9b0a540"} Jan 27 07:33:04 crc kubenswrapper[4764]: I0127 07:33:04.642381 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" event={"ID":"4799479c-9f58-454a-be5a-1c00be138aa4","Type":"ContainerStarted","Data":"c09dcc592167e4a95755f612bd01a486efd4a13018175b6d2a2a5f3514667e36"} Jan 27 07:33:05 crc kubenswrapper[4764]: I0127 07:33:05.651160 4764 generic.go:334] "Generic (PLEG): container finished" podID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerID="3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb" exitCode=0 Jan 27 07:33:05 crc kubenswrapper[4764]: I0127 07:33:05.651217 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj2jh" event={"ID":"f115191a-acf3-4ca6-a263-f5e155e355bb","Type":"ContainerDied","Data":"3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb"} Jan 27 07:33:06 crc kubenswrapper[4764]: I0127 07:33:06.435480 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:06 crc kubenswrapper[4764]: E0127 07:33:06.435660 4764 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 07:33:06 crc kubenswrapper[4764]: E0127 07:33:06.435816 4764 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 07:33:06 crc kubenswrapper[4764]: E0127 07:33:06.435862 4764 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift podName:3f481ed8-7f32-478b-88ce-6caaa3a42074 nodeName:}" failed. No retries permitted until 2026-01-27 07:33:10.43584761 +0000 UTC m=+1003.031470136 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift") pod "swift-storage-0" (UID: "3f481ed8-7f32-478b-88ce-6caaa3a42074") : configmap "swift-ring-files" not found Jan 27 07:33:06 crc kubenswrapper[4764]: E0127 07:33:06.741530 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb is running failed: container process not found" containerID="3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 07:33:06 crc kubenswrapper[4764]: E0127 07:33:06.742029 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb is running failed: container process not found" containerID="3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 07:33:06 crc kubenswrapper[4764]: E0127 07:33:06.742314 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb is running failed: container process not found" containerID="3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 07:33:06 crc kubenswrapper[4764]: E0127 07:33:06.742344 4764 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-nj2jh" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="registry-server" Jan 27 07:33:07 crc kubenswrapper[4764]: I0127 07:33:07.479826 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 07:33:07 crc kubenswrapper[4764]: I0127 07:33:07.479877 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 07:33:08 crc kubenswrapper[4764]: I0127 07:33:08.836817 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 07:33:08 crc kubenswrapper[4764]: I0127 07:33:08.837125 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 07:33:09 crc kubenswrapper[4764]: E0127 07:33:09.957708 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf is running failed: container process not found" containerID="a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 07:33:09 crc kubenswrapper[4764]: E0127 07:33:09.959719 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf is running failed: container process not found" containerID="a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 07:33:09 crc kubenswrapper[4764]: E0127 07:33:09.960742 4764 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf is running failed: container process not found" containerID="a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 07:33:09 crc kubenswrapper[4764]: E0127 07:33:09.960849 4764 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6f4qf" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="registry-server" Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.514855 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:10 crc kubenswrapper[4764]: E0127 07:33:10.515029 4764 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 07:33:10 crc kubenswrapper[4764]: E0127 07:33:10.515319 4764 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 07:33:10 crc kubenswrapper[4764]: E0127 07:33:10.515367 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift podName:3f481ed8-7f32-478b-88ce-6caaa3a42074 nodeName:}" failed. No retries permitted until 2026-01-27 07:33:18.515350047 +0000 UTC m=+1011.110972573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift") pod "swift-storage-0" (UID: "3f481ed8-7f32-478b-88ce-6caaa3a42074") : configmap "swift-ring-files" not found Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.701813 4764 generic.go:334] "Generic (PLEG): container finished" podID="0ad94135-c02a-47b0-a06f-26586de06990" containerID="f9325348494d74e84a22ce620275d87fd01d29837c5d930972e194f4eaddb631" exitCode=0 Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.701933 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" event={"ID":"0ad94135-c02a-47b0-a06f-26586de06990","Type":"ContainerDied","Data":"f9325348494d74e84a22ce620275d87fd01d29837c5d930972e194f4eaddb631"} Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.705270 4764 generic.go:334] "Generic (PLEG): container finished" podID="4799479c-9f58-454a-be5a-1c00be138aa4" containerID="fff23981e3e4c86d1b53aaf9af75598754565dedc7fc72bffa7fe2848d12c91d" exitCode=0 Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.705338 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" event={"ID":"4799479c-9f58-454a-be5a-1c00be138aa4","Type":"ContainerDied","Data":"fff23981e3e4c86d1b53aaf9af75598754565dedc7fc72bffa7fe2848d12c91d"} Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.709782 4764 generic.go:334] "Generic (PLEG): container finished" podID="392ea21f-523b-49dc-928e-e471151d4b8d" containerID="f02825a49c94d4d079c1c2df0fd63a87200dc281536bcb8993b9509d6ba78ae6" exitCode=0 Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.709865 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-z9rm5" event={"ID":"392ea21f-523b-49dc-928e-e471151d4b8d","Type":"ContainerDied","Data":"f02825a49c94d4d079c1c2df0fd63a87200dc281536bcb8993b9509d6ba78ae6"} Jan 27 07:33:10 crc kubenswrapper[4764]: 
I0127 07:33:10.713001 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hl87j" event={"ID":"3fc8ca71-7042-4f29-8c78-d7f1974c55c2","Type":"ContainerStarted","Data":"91e0f2f543304baa27999705804f7befd7c0cebff7a41b45c96faef82547ba6b"} Jan 27 07:33:10 crc kubenswrapper[4764]: I0127 07:33:10.757211 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hl87j" podStartSLOduration=7.757188281 podStartE2EDuration="7.757188281s" podCreationTimestamp="2026-01-27 07:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:10.738900807 +0000 UTC m=+1003.334523353" watchObservedRunningTime="2026-01-27 07:33:10.757188281 +0000 UTC m=+1003.352810807" Jan 27 07:33:11 crc kubenswrapper[4764]: I0127 07:33:11.974699 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.152266 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-ovsdbserver-sb\") pod \"392ea21f-523b-49dc-928e-e471151d4b8d\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.152379 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-dns-svc\") pod \"392ea21f-523b-49dc-928e-e471151d4b8d\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.152413 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-config\") pod 
\"392ea21f-523b-49dc-928e-e471151d4b8d\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.152561 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqsr9\" (UniqueName: \"kubernetes.io/projected/392ea21f-523b-49dc-928e-e471151d4b8d-kube-api-access-vqsr9\") pod \"392ea21f-523b-49dc-928e-e471151d4b8d\" (UID: \"392ea21f-523b-49dc-928e-e471151d4b8d\") " Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.168656 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392ea21f-523b-49dc-928e-e471151d4b8d-kube-api-access-vqsr9" (OuterVolumeSpecName: "kube-api-access-vqsr9") pod "392ea21f-523b-49dc-928e-e471151d4b8d" (UID: "392ea21f-523b-49dc-928e-e471151d4b8d"). InnerVolumeSpecName "kube-api-access-vqsr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.172014 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-config" (OuterVolumeSpecName: "config") pod "392ea21f-523b-49dc-928e-e471151d4b8d" (UID: "392ea21f-523b-49dc-928e-e471151d4b8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.172050 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "392ea21f-523b-49dc-928e-e471151d4b8d" (UID: "392ea21f-523b-49dc-928e-e471151d4b8d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.181560 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "392ea21f-523b-49dc-928e-e471151d4b8d" (UID: "392ea21f-523b-49dc-928e-e471151d4b8d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.254487 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.254527 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.254542 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392ea21f-523b-49dc-928e-e471151d4b8d-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.254554 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqsr9\" (UniqueName: \"kubernetes.io/projected/392ea21f-523b-49dc-928e-e471151d4b8d-kube-api-access-vqsr9\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.747488 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-z9rm5" event={"ID":"392ea21f-523b-49dc-928e-e471151d4b8d","Type":"ContainerDied","Data":"5f535e24952671490a14e82a3003dee76561a7768fd9df98c248e4e863d10269"} Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.747550 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-z9rm5" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.747961 4764 scope.go:117] "RemoveContainer" containerID="f02825a49c94d4d079c1c2df0fd63a87200dc281536bcb8993b9509d6ba78ae6" Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.804759 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-z9rm5"] Jan 27 07:33:12 crc kubenswrapper[4764]: I0127 07:33:12.809828 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-z9rm5"] Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.016346 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.039608 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.071196 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52zt8\" (UniqueName: \"kubernetes.io/projected/7957b4f6-f6e6-4655-8001-6f2c05c995bd-kube-api-access-52zt8\") pod \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.071338 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxg66\" (UniqueName: \"kubernetes.io/projected/f115191a-acf3-4ca6-a263-f5e155e355bb-kube-api-access-qxg66\") pod \"f115191a-acf3-4ca6-a263-f5e155e355bb\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.071389 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-utilities\") pod 
\"7957b4f6-f6e6-4655-8001-6f2c05c995bd\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.071499 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-catalog-content\") pod \"f115191a-acf3-4ca6-a263-f5e155e355bb\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.071662 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-utilities\") pod \"f115191a-acf3-4ca6-a263-f5e155e355bb\" (UID: \"f115191a-acf3-4ca6-a263-f5e155e355bb\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.071681 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-catalog-content\") pod \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\" (UID: \"7957b4f6-f6e6-4655-8001-6f2c05c995bd\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.072853 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-utilities" (OuterVolumeSpecName: "utilities") pod "f115191a-acf3-4ca6-a263-f5e155e355bb" (UID: "f115191a-acf3-4ca6-a263-f5e155e355bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.073613 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-utilities" (OuterVolumeSpecName: "utilities") pod "7957b4f6-f6e6-4655-8001-6f2c05c995bd" (UID: "7957b4f6-f6e6-4655-8001-6f2c05c995bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.077593 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f115191a-acf3-4ca6-a263-f5e155e355bb-kube-api-access-qxg66" (OuterVolumeSpecName: "kube-api-access-qxg66") pod "f115191a-acf3-4ca6-a263-f5e155e355bb" (UID: "f115191a-acf3-4ca6-a263-f5e155e355bb"). InnerVolumeSpecName "kube-api-access-qxg66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.078907 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7957b4f6-f6e6-4655-8001-6f2c05c995bd-kube-api-access-52zt8" (OuterVolumeSpecName: "kube-api-access-52zt8") pod "7957b4f6-f6e6-4655-8001-6f2c05c995bd" (UID: "7957b4f6-f6e6-4655-8001-6f2c05c995bd"). InnerVolumeSpecName "kube-api-access-52zt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.138125 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7957b4f6-f6e6-4655-8001-6f2c05c995bd" (UID: "7957b4f6-f6e6-4655-8001-6f2c05c995bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.147893 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f115191a-acf3-4ca6-a263-f5e155e355bb" (UID: "f115191a-acf3-4ca6-a263-f5e155e355bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.160748 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.185336 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52zt8\" (UniqueName: \"kubernetes.io/projected/7957b4f6-f6e6-4655-8001-6f2c05c995bd-kube-api-access-52zt8\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.185380 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxg66\" (UniqueName: \"kubernetes.io/projected/f115191a-acf3-4ca6-a263-f5e155e355bb-kube-api-access-qxg66\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.185397 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.185414 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.185457 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f115191a-acf3-4ca6-a263-f5e155e355bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.185505 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7957b4f6-f6e6-4655-8001-6f2c05c995bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.241607 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.431303 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.593021 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-dns-svc\") pod \"0ad94135-c02a-47b0-a06f-26586de06990\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.593394 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-config\") pod \"0ad94135-c02a-47b0-a06f-26586de06990\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.593472 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq5c8\" (UniqueName: \"kubernetes.io/projected/0ad94135-c02a-47b0-a06f-26586de06990-kube-api-access-sq5c8\") pod \"0ad94135-c02a-47b0-a06f-26586de06990\" (UID: \"0ad94135-c02a-47b0-a06f-26586de06990\") " Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.600507 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad94135-c02a-47b0-a06f-26586de06990-kube-api-access-sq5c8" (OuterVolumeSpecName: "kube-api-access-sq5c8") pod "0ad94135-c02a-47b0-a06f-26586de06990" (UID: "0ad94135-c02a-47b0-a06f-26586de06990"). InnerVolumeSpecName "kube-api-access-sq5c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.626520 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ad94135-c02a-47b0-a06f-26586de06990" (UID: "0ad94135-c02a-47b0-a06f-26586de06990"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.627598 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-config" (OuterVolumeSpecName: "config") pod "0ad94135-c02a-47b0-a06f-26586de06990" (UID: "0ad94135-c02a-47b0-a06f-26586de06990"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.696808 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.696856 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq5c8\" (UniqueName: \"kubernetes.io/projected/0ad94135-c02a-47b0-a06f-26586de06990-kube-api-access-sq5c8\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.696874 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad94135-c02a-47b0-a06f-26586de06990-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.777690 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj2jh" 
event={"ID":"f115191a-acf3-4ca6-a263-f5e155e355bb","Type":"ContainerDied","Data":"c80edd556adfc17f11ff65eb3d54b60e54082e987f1d3ae40990adedae894cdc"} Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.777890 4764 scope.go:117] "RemoveContainer" containerID="3a2b60bcb6d21b900c9695ea0d18ae48eee161781e97abae6e158d99cd77aceb" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.778276 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nj2jh" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.806706 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f4qf" event={"ID":"7957b4f6-f6e6-4655-8001-6f2c05c995bd","Type":"ContainerDied","Data":"6ab31300522b8510f3d8b756b38c9e40d768b6ff7de95811a61c2bcabeb39bf6"} Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.807176 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f4qf" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.812149 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fdxs" event={"ID":"5bc23877-3f5d-40bd-a1ee-4589e777beec","Type":"ContainerStarted","Data":"5eeec7a269ddad12e09fee4c1c2c9f50ce40701d2576454094444e60baeb145a"} Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.828447 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"febffc0b-3eb0-4183-993e-e12bb3e11744","Type":"ContainerStarted","Data":"9c118b23781f59e739a7c28592f3460a3d80b4582efddc2676ccfcad7fbc5702"} Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.830527 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" event={"ID":"0ad94135-c02a-47b0-a06f-26586de06990","Type":"ContainerDied","Data":"6dc1253942a53e9ea1090dd016883663e35d53b66b461701878adc6e515d0b14"} Jan 27 07:33:13 crc 
kubenswrapper[4764]: I0127 07:33:13.830565 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-8nrbd" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.845208 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5fdxs" podStartSLOduration=1.6404786040000001 podStartE2EDuration="10.84519241s" podCreationTimestamp="2026-01-27 07:33:03 +0000 UTC" firstStartedPulling="2026-01-27 07:33:04.197902533 +0000 UTC m=+996.793525049" lastFinishedPulling="2026-01-27 07:33:13.402616329 +0000 UTC m=+1005.998238855" observedRunningTime="2026-01-27 07:33:13.831347234 +0000 UTC m=+1006.426969760" watchObservedRunningTime="2026-01-27 07:33:13.84519241 +0000 UTC m=+1006.440814936" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.847416 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" event={"ID":"4799479c-9f58-454a-be5a-1c00be138aa4","Type":"ContainerStarted","Data":"e2a1bfed2ec6463d5f9309e543437ff3ec41f327d94fe55e855476d16d032c41"} Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.852824 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.865124 4764 scope.go:117] "RemoveContainer" containerID="0127bbda7c421f24fab2a703804cec087a3353913a01165d279dc8682de983e9" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.897635 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" podStartSLOduration=10.897609288 podStartE2EDuration="10.897609288s" podCreationTimestamp="2026-01-27 07:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:13.87086531 +0000 UTC m=+1006.466487866" watchObservedRunningTime="2026-01-27 
07:33:13.897609288 +0000 UTC m=+1006.493231814" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.908939 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nj2jh"] Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.920696 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nj2jh"] Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.922271 4764 scope.go:117] "RemoveContainer" containerID="039c3619a33dfb800bd42cf6ea5026013e17c37771a019d991ee57dff9c4e900" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.927785 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f4qf"] Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.974349 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6f4qf"] Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.980745 4764 scope.go:117] "RemoveContainer" containerID="a1aa3f98959e31be39e257083cc6fe8c0674ee968a9d8f7801b8d3568b26ebcf" Jan 27 07:33:13 crc kubenswrapper[4764]: I0127 07:33:13.999684 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-8nrbd"] Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.008738 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-8nrbd"] Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.049621 4764 scope.go:117] "RemoveContainer" containerID="f10c24890d8f204a0e9a7953d6feeb32088bc02445b51083f5f049da60071d8f" Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.076565 4764 scope.go:117] "RemoveContainer" containerID="de8c988b30ed60aa6ae07e7ab6662bcfe7f0b60983f25f289d4eae3d2614f3ac" Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.115779 4764 scope.go:117] "RemoveContainer" containerID="f9325348494d74e84a22ce620275d87fd01d29837c5d930972e194f4eaddb631" Jan 27 07:33:14 crc 
kubenswrapper[4764]: I0127 07:33:14.449366 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad94135-c02a-47b0-a06f-26586de06990" path="/var/lib/kubelet/pods/0ad94135-c02a-47b0-a06f-26586de06990/volumes" Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.450048 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392ea21f-523b-49dc-928e-e471151d4b8d" path="/var/lib/kubelet/pods/392ea21f-523b-49dc-928e-e471151d4b8d/volumes" Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.450520 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" path="/var/lib/kubelet/pods/7957b4f6-f6e6-4655-8001-6f2c05c995bd/volumes" Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.451636 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" path="/var/lib/kubelet/pods/f115191a-acf3-4ca6-a263-f5e155e355bb/volumes" Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.857388 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"febffc0b-3eb0-4183-993e-e12bb3e11744","Type":"ContainerStarted","Data":"f5dfeb75c6d93bd8a61fbffc5c7098e99c1cfb3376cc120aad5950c612dcc09d"} Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.857527 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 07:33:14 crc kubenswrapper[4764]: I0127 07:33:14.885256 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.871324091 podStartE2EDuration="11.885231674s" podCreationTimestamp="2026-01-27 07:33:03 +0000 UTC" firstStartedPulling="2026-01-27 07:33:04.384851444 +0000 UTC m=+996.980473970" lastFinishedPulling="2026-01-27 07:33:13.398759027 +0000 UTC m=+1005.994381553" observedRunningTime="2026-01-27 07:33:14.873810371 +0000 UTC m=+1007.469432897" 
watchObservedRunningTime="2026-01-27 07:33:14.885231674 +0000 UTC m=+1007.480854200" Jan 27 07:33:15 crc kubenswrapper[4764]: I0127 07:33:15.596216 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 07:33:15 crc kubenswrapper[4764]: I0127 07:33:15.671069 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.210847 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9rh4r"] Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211255 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="extract-content" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211278 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="extract-content" Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211319 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="registry-server" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211330 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="registry-server" Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211362 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="extract-utilities" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211371 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="extract-utilities" Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211387 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad94135-c02a-47b0-a06f-26586de06990" containerName="init" Jan 27 07:33:16 
crc kubenswrapper[4764]: I0127 07:33:16.211395 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad94135-c02a-47b0-a06f-26586de06990" containerName="init" Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211405 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="registry-server" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211413 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="registry-server" Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211426 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392ea21f-523b-49dc-928e-e471151d4b8d" containerName="init" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211434 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="392ea21f-523b-49dc-928e-e471151d4b8d" containerName="init" Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211526 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="extract-content" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211534 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="extract-content" Jan 27 07:33:16 crc kubenswrapper[4764]: E0127 07:33:16.211546 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="extract-utilities" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211555 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="extract-utilities" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211753 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad94135-c02a-47b0-a06f-26586de06990" containerName="init" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211771 4764 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f115191a-acf3-4ca6-a263-f5e155e355bb" containerName="registry-server" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211782 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="392ea21f-523b-49dc-928e-e471151d4b8d" containerName="init" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.211797 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="7957b4f6-f6e6-4655-8001-6f2c05c995bd" containerName="registry-server" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.212566 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.214949 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.219176 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9rh4r"] Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.343805 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwk98\" (UniqueName: \"kubernetes.io/projected/33b7ec40-c80d-4e38-9f86-e5494164b1c2-kube-api-access-zwk98\") pod \"root-account-create-update-9rh4r\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.343939 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b7ec40-c80d-4e38-9f86-e5494164b1c2-operator-scripts\") pod \"root-account-create-update-9rh4r\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.446059 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwk98\" (UniqueName: \"kubernetes.io/projected/33b7ec40-c80d-4e38-9f86-e5494164b1c2-kube-api-access-zwk98\") pod \"root-account-create-update-9rh4r\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.446145 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b7ec40-c80d-4e38-9f86-e5494164b1c2-operator-scripts\") pod \"root-account-create-update-9rh4r\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.447583 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b7ec40-c80d-4e38-9f86-e5494164b1c2-operator-scripts\") pod \"root-account-create-update-9rh4r\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.473746 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwk98\" (UniqueName: \"kubernetes.io/projected/33b7ec40-c80d-4e38-9f86-e5494164b1c2-kube-api-access-zwk98\") pod \"root-account-create-update-9rh4r\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.529349 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:16 crc kubenswrapper[4764]: I0127 07:33:16.956781 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9rh4r"] Jan 27 07:33:16 crc kubenswrapper[4764]: W0127 07:33:16.958977 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b7ec40_c80d_4e38_9f86_e5494164b1c2.slice/crio-89db978de8b3fb1473ac1e13e52f5b31b7cd17a844a8ce19dcb518a6ab317904 WatchSource:0}: Error finding container 89db978de8b3fb1473ac1e13e52f5b31b7cd17a844a8ce19dcb518a6ab317904: Status 404 returned error can't find the container with id 89db978de8b3fb1473ac1e13e52f5b31b7cd17a844a8ce19dcb518a6ab317904 Jan 27 07:33:17 crc kubenswrapper[4764]: I0127 07:33:17.882294 4764 generic.go:334] "Generic (PLEG): container finished" podID="33b7ec40-c80d-4e38-9f86-e5494164b1c2" containerID="9ccf0fb8f0b787b29f645f649929b3330a65277eaa21306d7909fad4ba59d538" exitCode=0 Jan 27 07:33:17 crc kubenswrapper[4764]: I0127 07:33:17.882374 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9rh4r" event={"ID":"33b7ec40-c80d-4e38-9f86-e5494164b1c2","Type":"ContainerDied","Data":"9ccf0fb8f0b787b29f645f649929b3330a65277eaa21306d7909fad4ba59d538"} Jan 27 07:33:17 crc kubenswrapper[4764]: I0127 07:33:17.882690 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9rh4r" event={"ID":"33b7ec40-c80d-4e38-9f86-e5494164b1c2","Type":"ContainerStarted","Data":"89db978de8b3fb1473ac1e13e52f5b31b7cd17a844a8ce19dcb518a6ab317904"} Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.578523 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " 
pod="openstack/swift-storage-0" Jan 27 07:33:18 crc kubenswrapper[4764]: E0127 07:33:18.578755 4764 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 07:33:18 crc kubenswrapper[4764]: E0127 07:33:18.578785 4764 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 07:33:18 crc kubenswrapper[4764]: E0127 07:33:18.578839 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift podName:3f481ed8-7f32-478b-88ce-6caaa3a42074 nodeName:}" failed. No retries permitted until 2026-01-27 07:33:34.5788221 +0000 UTC m=+1027.174444626 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift") pod "swift-storage-0" (UID: "3f481ed8-7f32-478b-88ce-6caaa3a42074") : configmap "swift-ring-files" not found Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.713472 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nbzj6"] Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.714469 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.727708 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nbzj6"] Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.777707 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9a77-account-create-update-ql2vm"] Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.778655 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.780163 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.785948 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6ml\" (UniqueName: \"kubernetes.io/projected/32d04865-44ab-4be1-be5d-01347984e03f-kube-api-access-sl6ml\") pod \"keystone-db-create-nbzj6\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.786018 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d04865-44ab-4be1-be5d-01347984e03f-operator-scripts\") pod \"keystone-db-create-nbzj6\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.805546 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9a77-account-create-update-ql2vm"] Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.879606 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.888302 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl6ml\" (UniqueName: \"kubernetes.io/projected/32d04865-44ab-4be1-be5d-01347984e03f-kube-api-access-sl6ml\") pod \"keystone-db-create-nbzj6\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.888384 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/32d04865-44ab-4be1-be5d-01347984e03f-operator-scripts\") pod \"keystone-db-create-nbzj6\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.888412 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9qjl\" (UniqueName: \"kubernetes.io/projected/9ea70529-f5f1-4fce-b115-4f3274e995a5-kube-api-access-j9qjl\") pod \"keystone-9a77-account-create-update-ql2vm\" (UID: \"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.888431 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea70529-f5f1-4fce-b115-4f3274e995a5-operator-scripts\") pod \"keystone-9a77-account-create-update-ql2vm\" (UID: \"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.889536 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d04865-44ab-4be1-be5d-01347984e03f-operator-scripts\") pod \"keystone-db-create-nbzj6\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.960621 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl6ml\" (UniqueName: \"kubernetes.io/projected/32d04865-44ab-4be1-be5d-01347984e03f-kube-api-access-sl6ml\") pod \"keystone-db-create-nbzj6\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.978252 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-95f5f6995-pnwl6"] Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.978489 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" podUID="f35324f3-25dd-4b25-8932-1d02eddcdd15" containerName="dnsmasq-dns" containerID="cri-o://95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07" gracePeriod=10 Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.990454 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9qjl\" (UniqueName: \"kubernetes.io/projected/9ea70529-f5f1-4fce-b115-4f3274e995a5-kube-api-access-j9qjl\") pod \"keystone-9a77-account-create-update-ql2vm\" (UID: \"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.990491 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea70529-f5f1-4fce-b115-4f3274e995a5-operator-scripts\") pod \"keystone-9a77-account-create-update-ql2vm\" (UID: \"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:18 crc kubenswrapper[4764]: I0127 07:33:18.991166 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea70529-f5f1-4fce-b115-4f3274e995a5-operator-scripts\") pod \"keystone-9a77-account-create-update-ql2vm\" (UID: \"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.018519 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9qjl\" (UniqueName: \"kubernetes.io/projected/9ea70529-f5f1-4fce-b115-4f3274e995a5-kube-api-access-j9qjl\") pod \"keystone-9a77-account-create-update-ql2vm\" (UID: 
\"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.034575 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.087329 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6dltf"] Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.088651 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.097143 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6dltf"] Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.103592 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.198083 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2c29\" (UniqueName: \"kubernetes.io/projected/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-kube-api-access-g2c29\") pod \"placement-db-create-6dltf\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.198252 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-operator-scripts\") pod \"placement-db-create-6dltf\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.213563 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7fe0-account-create-update-xjp2h"] Jan 27 07:33:19 crc 
kubenswrapper[4764]: I0127 07:33:19.214861 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.220474 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.224390 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fe0-account-create-update-xjp2h"] Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.288480 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.299769 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-operator-scripts\") pod \"placement-db-create-6dltf\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.299874 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vchq5\" (UniqueName: \"kubernetes.io/projected/a5c96778-194f-4ad0-bd79-5a60e60f70f6-kube-api-access-vchq5\") pod \"placement-7fe0-account-create-update-xjp2h\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.299932 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5c96778-194f-4ad0-bd79-5a60e60f70f6-operator-scripts\") pod \"placement-7fe0-account-create-update-xjp2h\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 
07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.299975 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2c29\" (UniqueName: \"kubernetes.io/projected/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-kube-api-access-g2c29\") pod \"placement-db-create-6dltf\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.301005 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-operator-scripts\") pod \"placement-db-create-6dltf\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.321924 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2c29\" (UniqueName: \"kubernetes.io/projected/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-kube-api-access-g2c29\") pod \"placement-db-create-6dltf\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.403150 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwk98\" (UniqueName: \"kubernetes.io/projected/33b7ec40-c80d-4e38-9f86-e5494164b1c2-kube-api-access-zwk98\") pod \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.403490 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b7ec40-c80d-4e38-9f86-e5494164b1c2-operator-scripts\") pod \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\" (UID: \"33b7ec40-c80d-4e38-9f86-e5494164b1c2\") " Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.403777 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5c96778-194f-4ad0-bd79-5a60e60f70f6-operator-scripts\") pod \"placement-7fe0-account-create-update-xjp2h\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.403922 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vchq5\" (UniqueName: \"kubernetes.io/projected/a5c96778-194f-4ad0-bd79-5a60e60f70f6-kube-api-access-vchq5\") pod \"placement-7fe0-account-create-update-xjp2h\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.404778 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5c96778-194f-4ad0-bd79-5a60e60f70f6-operator-scripts\") pod \"placement-7fe0-account-create-update-xjp2h\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.404133 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b7ec40-c80d-4e38-9f86-e5494164b1c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33b7ec40-c80d-4e38-9f86-e5494164b1c2" (UID: "33b7ec40-c80d-4e38-9f86-e5494164b1c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.406886 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b7ec40-c80d-4e38-9f86-e5494164b1c2-kube-api-access-zwk98" (OuterVolumeSpecName: "kube-api-access-zwk98") pod "33b7ec40-c80d-4e38-9f86-e5494164b1c2" (UID: "33b7ec40-c80d-4e38-9f86-e5494164b1c2"). 
InnerVolumeSpecName "kube-api-access-zwk98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.419676 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vchq5\" (UniqueName: \"kubernetes.io/projected/a5c96778-194f-4ad0-bd79-5a60e60f70f6-kube-api-access-vchq5\") pod \"placement-7fe0-account-create-update-xjp2h\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.446404 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6dltf" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.506023 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwk98\" (UniqueName: \"kubernetes.io/projected/33b7ec40-c80d-4e38-9f86-e5494164b1c2-kube-api-access-zwk98\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.506059 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b7ec40-c80d-4e38-9f86-e5494164b1c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.515534 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mc8vn"] Jan 27 07:33:19 crc kubenswrapper[4764]: E0127 07:33:19.515881 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b7ec40-c80d-4e38-9f86-e5494164b1c2" containerName="mariadb-account-create-update" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.515900 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b7ec40-c80d-4e38-9f86-e5494164b1c2" containerName="mariadb-account-create-update" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.516921 4764 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="33b7ec40-c80d-4e38-9f86-e5494164b1c2" containerName="mariadb-account-create-update" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.517481 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.539655 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-be4a-account-create-update-kqxnz"] Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.540790 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.542996 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.551284 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.561532 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mc8vn"] Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.572500 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-be4a-account-create-update-kqxnz"] Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.599095 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.608153 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqg8\" (UniqueName: \"kubernetes.io/projected/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-kube-api-access-lvqg8\") pod \"glance-db-create-mc8vn\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.608292 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-operator-scripts\") pod \"glance-db-create-mc8vn\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.608332 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a48640-b2e1-4f03-8579-e4983204deb9-operator-scripts\") pod \"glance-be4a-account-create-update-kqxnz\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.608365 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbrh\" (UniqueName: \"kubernetes.io/projected/c1a48640-b2e1-4f03-8579-e4983204deb9-kube-api-access-ztbrh\") pod \"glance-be4a-account-create-update-kqxnz\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.641023 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nbzj6"] Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.709729 4764 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-config\") pod \"f35324f3-25dd-4b25-8932-1d02eddcdd15\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.710249 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t852\" (UniqueName: \"kubernetes.io/projected/f35324f3-25dd-4b25-8932-1d02eddcdd15-kube-api-access-5t852\") pod \"f35324f3-25dd-4b25-8932-1d02eddcdd15\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.710282 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-dns-svc\") pod \"f35324f3-25dd-4b25-8932-1d02eddcdd15\" (UID: \"f35324f3-25dd-4b25-8932-1d02eddcdd15\") " Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.710570 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-operator-scripts\") pod \"glance-db-create-mc8vn\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.710598 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a48640-b2e1-4f03-8579-e4983204deb9-operator-scripts\") pod \"glance-be4a-account-create-update-kqxnz\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.710640 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbrh\" (UniqueName: 
\"kubernetes.io/projected/c1a48640-b2e1-4f03-8579-e4983204deb9-kube-api-access-ztbrh\") pod \"glance-be4a-account-create-update-kqxnz\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.710713 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvqg8\" (UniqueName: \"kubernetes.io/projected/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-kube-api-access-lvqg8\") pod \"glance-db-create-mc8vn\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.712161 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-operator-scripts\") pod \"glance-db-create-mc8vn\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.712970 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a48640-b2e1-4f03-8579-e4983204deb9-operator-scripts\") pod \"glance-be4a-account-create-update-kqxnz\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.719308 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35324f3-25dd-4b25-8932-1d02eddcdd15-kube-api-access-5t852" (OuterVolumeSpecName: "kube-api-access-5t852") pod "f35324f3-25dd-4b25-8932-1d02eddcdd15" (UID: "f35324f3-25dd-4b25-8932-1d02eddcdd15"). InnerVolumeSpecName "kube-api-access-5t852". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.731279 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvqg8\" (UniqueName: \"kubernetes.io/projected/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-kube-api-access-lvqg8\") pod \"glance-db-create-mc8vn\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.735348 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbrh\" (UniqueName: \"kubernetes.io/projected/c1a48640-b2e1-4f03-8579-e4983204deb9-kube-api-access-ztbrh\") pod \"glance-be4a-account-create-update-kqxnz\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.763866 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f35324f3-25dd-4b25-8932-1d02eddcdd15" (UID: "f35324f3-25dd-4b25-8932-1d02eddcdd15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.764012 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-config" (OuterVolumeSpecName: "config") pod "f35324f3-25dd-4b25-8932-1d02eddcdd15" (UID: "f35324f3-25dd-4b25-8932-1d02eddcdd15"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.804369 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9a77-account-create-update-ql2vm"] Jan 27 07:33:19 crc kubenswrapper[4764]: W0127 07:33:19.809387 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ea70529_f5f1_4fce_b115_4f3274e995a5.slice/crio-81bc6c93c61ef3918adb4e40f2bd09717bcd4689950a5fa2b13acd25d1bd10be WatchSource:0}: Error finding container 81bc6c93c61ef3918adb4e40f2bd09717bcd4689950a5fa2b13acd25d1bd10be: Status 404 returned error can't find the container with id 81bc6c93c61ef3918adb4e40f2bd09717bcd4689950a5fa2b13acd25d1bd10be Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.812476 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.812509 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t852\" (UniqueName: \"kubernetes.io/projected/f35324f3-25dd-4b25-8932-1d02eddcdd15-kube-api-access-5t852\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.812520 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f35324f3-25dd-4b25-8932-1d02eddcdd15-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.842937 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.868015 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.931919 4764 generic.go:334] "Generic (PLEG): container finished" podID="f35324f3-25dd-4b25-8932-1d02eddcdd15" containerID="95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07" exitCode=0 Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.931991 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.931990 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" event={"ID":"f35324f3-25dd-4b25-8932-1d02eddcdd15","Type":"ContainerDied","Data":"95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07"} Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.932396 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnwl6" event={"ID":"f35324f3-25dd-4b25-8932-1d02eddcdd15","Type":"ContainerDied","Data":"32f397f354b20df456afa2c02b67492f85d27b2379bcc68191e586ca0ee1eeea"} Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.932428 4764 scope.go:117] "RemoveContainer" containerID="95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.940546 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbzj6" event={"ID":"32d04865-44ab-4be1-be5d-01347984e03f","Type":"ContainerStarted","Data":"ae23932d8de22d9b3b0168b1ec7a28e2a87a1de58419ab9adc728780282e9581"} Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.940604 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbzj6" event={"ID":"32d04865-44ab-4be1-be5d-01347984e03f","Type":"ContainerStarted","Data":"d9330185db84e414e0ad919fc441f0f47e66b677bcf737e0ef265f01e5df82ae"} Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 
07:33:19.941667 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9a77-account-create-update-ql2vm" event={"ID":"9ea70529-f5f1-4fce-b115-4f3274e995a5","Type":"ContainerStarted","Data":"81bc6c93c61ef3918adb4e40f2bd09717bcd4689950a5fa2b13acd25d1bd10be"} Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.944964 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9rh4r" event={"ID":"33b7ec40-c80d-4e38-9f86-e5494164b1c2","Type":"ContainerDied","Data":"89db978de8b3fb1473ac1e13e52f5b31b7cd17a844a8ce19dcb518a6ab317904"} Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.945017 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89db978de8b3fb1473ac1e13e52f5b31b7cd17a844a8ce19dcb518a6ab317904" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.945048 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9rh4r" Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.945282 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6dltf"] Jan 27 07:33:19 crc kubenswrapper[4764]: W0127 07:33:19.947494 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod720c25d4_64d8_44fe_8d52_fa50f8fc3b2d.slice/crio-fef633be7133a565147b177951be620cc98405d141d0cc76bad964252ce8b530 WatchSource:0}: Error finding container fef633be7133a565147b177951be620cc98405d141d0cc76bad964252ce8b530: Status 404 returned error can't find the container with id fef633be7133a565147b177951be620cc98405d141d0cc76bad964252ce8b530 Jan 27 07:33:19 crc kubenswrapper[4764]: I0127 07:33:19.980198 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-nbzj6" podStartSLOduration=1.980182112 podStartE2EDuration="1.980182112s" podCreationTimestamp="2026-01-27 07:33:18 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:19.978701202 +0000 UTC m=+1012.574323728" watchObservedRunningTime="2026-01-27 07:33:19.980182112 +0000 UTC m=+1012.575804628" Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.076610 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fe0-account-create-update-xjp2h"] Jan 27 07:33:20 crc kubenswrapper[4764]: W0127 07:33:20.145334 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5c96778_194f_4ad0_bd79_5a60e60f70f6.slice/crio-273183e36045ee6d11b4f9af429c3580f0318c3e4516d0567dc9a708f4bc632a WatchSource:0}: Error finding container 273183e36045ee6d11b4f9af429c3580f0318c3e4516d0567dc9a708f4bc632a: Status 404 returned error can't find the container with id 273183e36045ee6d11b4f9af429c3580f0318c3e4516d0567dc9a708f4bc632a Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.199927 4764 scope.go:117] "RemoveContainer" containerID="4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f" Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.288538 4764 scope.go:117] "RemoveContainer" containerID="95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07" Jan 27 07:33:20 crc kubenswrapper[4764]: E0127 07:33:20.291935 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07\": container with ID starting with 95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07 not found: ID does not exist" containerID="95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07" Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.291976 4764 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07"} err="failed to get container status \"95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07\": rpc error: code = NotFound desc = could not find container \"95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07\": container with ID starting with 95a8f68cf440c7737d73e8f637e16cc5a967fe185564b8ac278f0b71031cbb07 not found: ID does not exist" Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.292004 4764 scope.go:117] "RemoveContainer" containerID="4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f" Jan 27 07:33:20 crc kubenswrapper[4764]: E0127 07:33:20.292266 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f\": container with ID starting with 4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f not found: ID does not exist" containerID="4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f" Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.292288 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f"} err="failed to get container status \"4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f\": rpc error: code = NotFound desc = could not find container \"4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f\": container with ID starting with 4fa433e48007b72d7ed1d56bab923c00471779426a25631629be309028dfaf3f not found: ID does not exist" Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.296405 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnwl6"] Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.302743 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-95f5f6995-pnwl6"] Jan 27 07:33:20 crc kubenswrapper[4764]: W0127 07:33:20.386059 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d71a0a7_8c6a_4a81_81f9_be5d0ce20faf.slice/crio-5ce88f6436953d13978939dc1acd3a0c14883030e0d4afbaf27bd4c8dad92a75 WatchSource:0}: Error finding container 5ce88f6436953d13978939dc1acd3a0c14883030e0d4afbaf27bd4c8dad92a75: Status 404 returned error can't find the container with id 5ce88f6436953d13978939dc1acd3a0c14883030e0d4afbaf27bd4c8dad92a75 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.388602 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mc8vn"] Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.454477 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35324f3-25dd-4b25-8932-1d02eddcdd15" path="/var/lib/kubelet/pods/f35324f3-25dd-4b25-8932-1d02eddcdd15/volumes" Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.485133 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-be4a-account-create-update-kqxnz"] Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.954398 4764 generic.go:334] "Generic (PLEG): container finished" podID="720c25d4-64d8-44fe-8d52-fa50f8fc3b2d" containerID="6302e6e7cfa39fdff1499828afbd77a948fc011a4af517d28252f930049ce663" exitCode=0 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.954466 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6dltf" event={"ID":"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d","Type":"ContainerDied","Data":"6302e6e7cfa39fdff1499828afbd77a948fc011a4af517d28252f930049ce663"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.954514 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6dltf" 
event={"ID":"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d","Type":"ContainerStarted","Data":"fef633be7133a565147b177951be620cc98405d141d0cc76bad964252ce8b530"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.956146 4764 generic.go:334] "Generic (PLEG): container finished" podID="32d04865-44ab-4be1-be5d-01347984e03f" containerID="ae23932d8de22d9b3b0168b1ec7a28e2a87a1de58419ab9adc728780282e9581" exitCode=0 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.956243 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbzj6" event={"ID":"32d04865-44ab-4be1-be5d-01347984e03f","Type":"ContainerDied","Data":"ae23932d8de22d9b3b0168b1ec7a28e2a87a1de58419ab9adc728780282e9581"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.957854 4764 generic.go:334] "Generic (PLEG): container finished" podID="5bc23877-3f5d-40bd-a1ee-4589e777beec" containerID="5eeec7a269ddad12e09fee4c1c2c9f50ce40701d2576454094444e60baeb145a" exitCode=0 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.957915 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fdxs" event={"ID":"5bc23877-3f5d-40bd-a1ee-4589e777beec","Type":"ContainerDied","Data":"5eeec7a269ddad12e09fee4c1c2c9f50ce40701d2576454094444e60baeb145a"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.959488 4764 generic.go:334] "Generic (PLEG): container finished" podID="9ea70529-f5f1-4fce-b115-4f3274e995a5" containerID="6184b5a48b3a7306f15bfeebc9a9d2b5669c15662b11a46c1492e740596778ba" exitCode=0 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.959543 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9a77-account-create-update-ql2vm" event={"ID":"9ea70529-f5f1-4fce-b115-4f3274e995a5","Type":"ContainerDied","Data":"6184b5a48b3a7306f15bfeebc9a9d2b5669c15662b11a46c1492e740596778ba"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.960996 4764 generic.go:334] "Generic (PLEG): container finished" 
podID="c1a48640-b2e1-4f03-8579-e4983204deb9" containerID="29354f239bda28fff6f9e9d68056fa04510a540ebc6ebe9da3836911214ce4c1" exitCode=0 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.961062 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-be4a-account-create-update-kqxnz" event={"ID":"c1a48640-b2e1-4f03-8579-e4983204deb9","Type":"ContainerDied","Data":"29354f239bda28fff6f9e9d68056fa04510a540ebc6ebe9da3836911214ce4c1"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.961097 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-be4a-account-create-update-kqxnz" event={"ID":"c1a48640-b2e1-4f03-8579-e4983204deb9","Type":"ContainerStarted","Data":"d4f61bbc0152a12cac1c848382d91ef6dccb61ea469e9ccacb797687a7353a54"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.963222 4764 generic.go:334] "Generic (PLEG): container finished" podID="0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf" containerID="88a537c7298f4e11c8a5130a7c8d7a8af1cf483855796556c0eefbc2250dd181" exitCode=0 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.963276 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mc8vn" event={"ID":"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf","Type":"ContainerDied","Data":"88a537c7298f4e11c8a5130a7c8d7a8af1cf483855796556c0eefbc2250dd181"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.963298 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mc8vn" event={"ID":"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf","Type":"ContainerStarted","Data":"5ce88f6436953d13978939dc1acd3a0c14883030e0d4afbaf27bd4c8dad92a75"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.964662 4764 generic.go:334] "Generic (PLEG): container finished" podID="a5c96778-194f-4ad0-bd79-5a60e60f70f6" containerID="01a037e4ea618811c59c83eb27e1982033b5f300ad25748a2a387f0ee7be1b09" exitCode=0 Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.964714 4764 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-7fe0-account-create-update-xjp2h" event={"ID":"a5c96778-194f-4ad0-bd79-5a60e60f70f6","Type":"ContainerDied","Data":"01a037e4ea618811c59c83eb27e1982033b5f300ad25748a2a387f0ee7be1b09"} Jan 27 07:33:20 crc kubenswrapper[4764]: I0127 07:33:20.964761 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fe0-account-create-update-xjp2h" event={"ID":"a5c96778-194f-4ad0-bd79-5a60e60f70f6","Type":"ContainerStarted","Data":"273183e36045ee6d11b4f9af429c3580f0318c3e4516d0567dc9a708f4bc632a"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.362730 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.463711 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl6ml\" (UniqueName: \"kubernetes.io/projected/32d04865-44ab-4be1-be5d-01347984e03f-kube-api-access-sl6ml\") pod \"32d04865-44ab-4be1-be5d-01347984e03f\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.463806 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d04865-44ab-4be1-be5d-01347984e03f-operator-scripts\") pod \"32d04865-44ab-4be1-be5d-01347984e03f\" (UID: \"32d04865-44ab-4be1-be5d-01347984e03f\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.466838 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d04865-44ab-4be1-be5d-01347984e03f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32d04865-44ab-4be1-be5d-01347984e03f" (UID: "32d04865-44ab-4be1-be5d-01347984e03f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.476835 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d04865-44ab-4be1-be5d-01347984e03f-kube-api-access-sl6ml" (OuterVolumeSpecName: "kube-api-access-sl6ml") pod "32d04865-44ab-4be1-be5d-01347984e03f" (UID: "32d04865-44ab-4be1-be5d-01347984e03f"). InnerVolumeSpecName "kube-api-access-sl6ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.512074 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9rh4r"] Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.518163 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9rh4r"] Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.568637 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl6ml\" (UniqueName: \"kubernetes.io/projected/32d04865-44ab-4be1-be5d-01347984e03f-kube-api-access-sl6ml\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.568692 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d04865-44ab-4be1-be5d-01347984e03f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.609762 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6dltf" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.616387 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.629275 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.638592 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.652264 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.664934 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.669670 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-operator-scripts\") pod \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.669765 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbrh\" (UniqueName: \"kubernetes.io/projected/c1a48640-b2e1-4f03-8579-e4983204deb9-kube-api-access-ztbrh\") pod \"c1a48640-b2e1-4f03-8579-e4983204deb9\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.669802 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a48640-b2e1-4f03-8579-e4983204deb9-operator-scripts\") pod \"c1a48640-b2e1-4f03-8579-e4983204deb9\" (UID: \"c1a48640-b2e1-4f03-8579-e4983204deb9\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.669904 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2c29\" (UniqueName: 
\"kubernetes.io/projected/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-kube-api-access-g2c29\") pod \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\" (UID: \"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.670788 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "720c25d4-64d8-44fe-8d52-fa50f8fc3b2d" (UID: "720c25d4-64d8-44fe-8d52-fa50f8fc3b2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.670828 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a48640-b2e1-4f03-8579-e4983204deb9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1a48640-b2e1-4f03-8579-e4983204deb9" (UID: "c1a48640-b2e1-4f03-8579-e4983204deb9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.673828 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a48640-b2e1-4f03-8579-e4983204deb9-kube-api-access-ztbrh" (OuterVolumeSpecName: "kube-api-access-ztbrh") pod "c1a48640-b2e1-4f03-8579-e4983204deb9" (UID: "c1a48640-b2e1-4f03-8579-e4983204deb9"). InnerVolumeSpecName "kube-api-access-ztbrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.674346 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-kube-api-access-g2c29" (OuterVolumeSpecName: "kube-api-access-g2c29") pod "720c25d4-64d8-44fe-8d52-fa50f8fc3b2d" (UID: "720c25d4-64d8-44fe-8d52-fa50f8fc3b2d"). InnerVolumeSpecName "kube-api-access-g2c29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771193 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-ring-data-devices\") pod \"5bc23877-3f5d-40bd-a1ee-4589e777beec\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771276 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-scripts\") pod \"5bc23877-3f5d-40bd-a1ee-4589e777beec\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771309 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5c96778-194f-4ad0-bd79-5a60e60f70f6-operator-scripts\") pod \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771337 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bc23877-3f5d-40bd-a1ee-4589e777beec-etc-swift\") pod \"5bc23877-3f5d-40bd-a1ee-4589e777beec\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771473 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9qjl\" (UniqueName: \"kubernetes.io/projected/9ea70529-f5f1-4fce-b115-4f3274e995a5-kube-api-access-j9qjl\") pod \"9ea70529-f5f1-4fce-b115-4f3274e995a5\" (UID: \"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771490 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvqg8\" 
(UniqueName: \"kubernetes.io/projected/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-kube-api-access-lvqg8\") pod \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771536 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vchq5\" (UniqueName: \"kubernetes.io/projected/a5c96778-194f-4ad0-bd79-5a60e60f70f6-kube-api-access-vchq5\") pod \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\" (UID: \"a5c96778-194f-4ad0-bd79-5a60e60f70f6\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771695 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-operator-scripts\") pod \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\" (UID: \"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771776 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-swiftconf\") pod \"5bc23877-3f5d-40bd-a1ee-4589e777beec\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771811 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea70529-f5f1-4fce-b115-4f3274e995a5-operator-scripts\") pod \"9ea70529-f5f1-4fce-b115-4f3274e995a5\" (UID: \"9ea70529-f5f1-4fce-b115-4f3274e995a5\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771838 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-combined-ca-bundle\") pod \"5bc23877-3f5d-40bd-a1ee-4589e777beec\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " Jan 27 07:33:22 crc 
kubenswrapper[4764]: I0127 07:33:22.771875 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj685\" (UniqueName: \"kubernetes.io/projected/5bc23877-3f5d-40bd-a1ee-4589e777beec-kube-api-access-kj685\") pod \"5bc23877-3f5d-40bd-a1ee-4589e777beec\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.771906 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-dispersionconf\") pod \"5bc23877-3f5d-40bd-a1ee-4589e777beec\" (UID: \"5bc23877-3f5d-40bd-a1ee-4589e777beec\") " Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772054 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c96778-194f-4ad0-bd79-5a60e60f70f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5c96778-194f-4ad0-bd79-5a60e60f70f6" (UID: "a5c96778-194f-4ad0-bd79-5a60e60f70f6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772409 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbrh\" (UniqueName: \"kubernetes.io/projected/c1a48640-b2e1-4f03-8579-e4983204deb9-kube-api-access-ztbrh\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772456 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a48640-b2e1-4f03-8579-e4983204deb9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772470 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5c96778-194f-4ad0-bd79-5a60e60f70f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772482 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2c29\" (UniqueName: \"kubernetes.io/projected/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-kube-api-access-g2c29\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772493 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772491 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc23877-3f5d-40bd-a1ee-4589e777beec-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5bc23877-3f5d-40bd-a1ee-4589e777beec" (UID: "5bc23877-3f5d-40bd-a1ee-4589e777beec"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.772491 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5bc23877-3f5d-40bd-a1ee-4589e777beec" (UID: "5bc23877-3f5d-40bd-a1ee-4589e777beec"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.773020 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ea70529-f5f1-4fce-b115-4f3274e995a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ea70529-f5f1-4fce-b115-4f3274e995a5" (UID: "9ea70529-f5f1-4fce-b115-4f3274e995a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.773484 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf" (UID: "0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.775190 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5c96778-194f-4ad0-bd79-5a60e60f70f6-kube-api-access-vchq5" (OuterVolumeSpecName: "kube-api-access-vchq5") pod "a5c96778-194f-4ad0-bd79-5a60e60f70f6" (UID: "a5c96778-194f-4ad0-bd79-5a60e60f70f6"). InnerVolumeSpecName "kube-api-access-vchq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.775696 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc23877-3f5d-40bd-a1ee-4589e777beec-kube-api-access-kj685" (OuterVolumeSpecName: "kube-api-access-kj685") pod "5bc23877-3f5d-40bd-a1ee-4589e777beec" (UID: "5bc23877-3f5d-40bd-a1ee-4589e777beec"). InnerVolumeSpecName "kube-api-access-kj685". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.776031 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-kube-api-access-lvqg8" (OuterVolumeSpecName: "kube-api-access-lvqg8") pod "0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf" (UID: "0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf"). InnerVolumeSpecName "kube-api-access-lvqg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.776199 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea70529-f5f1-4fce-b115-4f3274e995a5-kube-api-access-j9qjl" (OuterVolumeSpecName: "kube-api-access-j9qjl") pod "9ea70529-f5f1-4fce-b115-4f3274e995a5" (UID: "9ea70529-f5f1-4fce-b115-4f3274e995a5"). InnerVolumeSpecName "kube-api-access-j9qjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.779257 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5bc23877-3f5d-40bd-a1ee-4589e777beec" (UID: "5bc23877-3f5d-40bd-a1ee-4589e777beec"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.788740 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-scripts" (OuterVolumeSpecName: "scripts") pod "5bc23877-3f5d-40bd-a1ee-4589e777beec" (UID: "5bc23877-3f5d-40bd-a1ee-4589e777beec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.795516 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5bc23877-3f5d-40bd-a1ee-4589e777beec" (UID: "5bc23877-3f5d-40bd-a1ee-4589e777beec"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.808543 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bc23877-3f5d-40bd-a1ee-4589e777beec" (UID: "5bc23877-3f5d-40bd-a1ee-4589e777beec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874288 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vchq5\" (UniqueName: \"kubernetes.io/projected/a5c96778-194f-4ad0-bd79-5a60e60f70f6-kube-api-access-vchq5\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874368 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874378 4764 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874389 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea70529-f5f1-4fce-b115-4f3274e995a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874398 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874406 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj685\" (UniqueName: \"kubernetes.io/projected/5bc23877-3f5d-40bd-a1ee-4589e777beec-kube-api-access-kj685\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874414 4764 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bc23877-3f5d-40bd-a1ee-4589e777beec-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: 
I0127 07:33:22.874422 4764 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874430 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bc23877-3f5d-40bd-a1ee-4589e777beec-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874465 4764 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bc23877-3f5d-40bd-a1ee-4589e777beec-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874477 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9qjl\" (UniqueName: \"kubernetes.io/projected/9ea70529-f5f1-4fce-b115-4f3274e995a5-kube-api-access-j9qjl\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.874487 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvqg8\" (UniqueName: \"kubernetes.io/projected/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf-kube-api-access-lvqg8\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.982906 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nbzj6" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.983131 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbzj6" event={"ID":"32d04865-44ab-4be1-be5d-01347984e03f","Type":"ContainerDied","Data":"d9330185db84e414e0ad919fc441f0f47e66b677bcf737e0ef265f01e5df82ae"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.983170 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9330185db84e414e0ad919fc441f0f47e66b677bcf737e0ef265f01e5df82ae" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.985062 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fdxs" event={"ID":"5bc23877-3f5d-40bd-a1ee-4589e777beec","Type":"ContainerDied","Data":"987ec83855c1f0074b4d5e8155b93321327f4f0e235e0871746e5102b1c66e6d"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.985085 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="987ec83855c1f0074b4d5e8155b93321327f4f0e235e0871746e5102b1c66e6d" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.985178 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fdxs" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.987529 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9a77-account-create-update-ql2vm" event={"ID":"9ea70529-f5f1-4fce-b115-4f3274e995a5","Type":"ContainerDied","Data":"81bc6c93c61ef3918adb4e40f2bd09717bcd4689950a5fa2b13acd25d1bd10be"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.987573 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81bc6c93c61ef3918adb4e40f2bd09717bcd4689950a5fa2b13acd25d1bd10be" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.987547 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9a77-account-create-update-ql2vm" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.989523 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-be4a-account-create-update-kqxnz" event={"ID":"c1a48640-b2e1-4f03-8579-e4983204deb9","Type":"ContainerDied","Data":"d4f61bbc0152a12cac1c848382d91ef6dccb61ea469e9ccacb797687a7353a54"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.989578 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4f61bbc0152a12cac1c848382d91ef6dccb61ea469e9ccacb797687a7353a54" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.989651 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-be4a-account-create-update-kqxnz" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.992787 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mc8vn" event={"ID":"0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf","Type":"ContainerDied","Data":"5ce88f6436953d13978939dc1acd3a0c14883030e0d4afbaf27bd4c8dad92a75"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.992825 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce88f6436953d13978939dc1acd3a0c14883030e0d4afbaf27bd4c8dad92a75" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.992793 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mc8vn" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.995705 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7fe0-account-create-update-xjp2h" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.995715 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fe0-account-create-update-xjp2h" event={"ID":"a5c96778-194f-4ad0-bd79-5a60e60f70f6","Type":"ContainerDied","Data":"273183e36045ee6d11b4f9af429c3580f0318c3e4516d0567dc9a708f4bc632a"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.995745 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273183e36045ee6d11b4f9af429c3580f0318c3e4516d0567dc9a708f4bc632a" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.997771 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6dltf" event={"ID":"720c25d4-64d8-44fe-8d52-fa50f8fc3b2d","Type":"ContainerDied","Data":"fef633be7133a565147b177951be620cc98405d141d0cc76bad964252ce8b530"} Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.997793 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef633be7133a565147b177951be620cc98405d141d0cc76bad964252ce8b530" Jan 27 07:33:22 crc kubenswrapper[4764]: I0127 07:33:22.997848 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6dltf" Jan 27 07:33:23 crc kubenswrapper[4764]: I0127 07:33:23.762293 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:33:23 crc kubenswrapper[4764]: I0127 07:33:23.762361 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:33:23 crc kubenswrapper[4764]: I0127 07:33:23.849576 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.451674 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b7ec40-c80d-4e38-9f86-e5494164b1c2" path="/var/lib/kubelet/pods/33b7ec40-c80d-4e38-9f86-e5494164b1c2/volumes" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.497600 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rw2w4" podUID="cad0e5a9-459c-4f9b-865b-ddc533316170" containerName="ovn-controller" probeResult="failure" output=< Jan 27 07:33:24 crc kubenswrapper[4764]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 07:33:24 crc kubenswrapper[4764]: > Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.723833 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qln9d"] Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724429 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35324f3-25dd-4b25-8932-1d02eddcdd15" 
containerName="init" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724474 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35324f3-25dd-4b25-8932-1d02eddcdd15" containerName="init" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724496 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d04865-44ab-4be1-be5d-01347984e03f" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724505 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d04865-44ab-4be1-be5d-01347984e03f" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724526 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="720c25d4-64d8-44fe-8d52-fa50f8fc3b2d" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724534 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="720c25d4-64d8-44fe-8d52-fa50f8fc3b2d" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724549 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc23877-3f5d-40bd-a1ee-4589e777beec" containerName="swift-ring-rebalance" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724557 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc23877-3f5d-40bd-a1ee-4589e777beec" containerName="swift-ring-rebalance" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724571 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724580 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724598 4764 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a5c96778-194f-4ad0-bd79-5a60e60f70f6" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724606 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c96778-194f-4ad0-bd79-5a60e60f70f6" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724619 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea70529-f5f1-4fce-b115-4f3274e995a5" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724628 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea70529-f5f1-4fce-b115-4f3274e995a5" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724641 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35324f3-25dd-4b25-8932-1d02eddcdd15" containerName="dnsmasq-dns" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724648 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35324f3-25dd-4b25-8932-1d02eddcdd15" containerName="dnsmasq-dns" Jan 27 07:33:24 crc kubenswrapper[4764]: E0127 07:33:24.724657 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a48640-b2e1-4f03-8579-e4983204deb9" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724665 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a48640-b2e1-4f03-8579-e4983204deb9" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724879 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea70529-f5f1-4fce-b115-4f3274e995a5" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724894 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="720c25d4-64d8-44fe-8d52-fa50f8fc3b2d" containerName="mariadb-database-create" Jan 27 07:33:24 crc 
kubenswrapper[4764]: I0127 07:33:24.724905 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5c96778-194f-4ad0-bd79-5a60e60f70f6" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724957 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35324f3-25dd-4b25-8932-1d02eddcdd15" containerName="dnsmasq-dns" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724973 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a48640-b2e1-4f03-8579-e4983204deb9" containerName="mariadb-account-create-update" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724989 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.724998 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d04865-44ab-4be1-be5d-01347984e03f" containerName="mariadb-database-create" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.725007 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc23877-3f5d-40bd-a1ee-4589e777beec" containerName="swift-ring-rebalance" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.725877 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.731326 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qln9d"] Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.781814 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.783092 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gr88t" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.809077 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-db-sync-config-data\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.809125 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-config-data\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.809176 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-combined-ca-bundle\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.809271 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7nq\" (UniqueName: 
\"kubernetes.io/projected/2ac97941-f22c-4599-9674-eeebb1347a85-kube-api-access-9q7nq\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.910321 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-combined-ca-bundle\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.910465 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7nq\" (UniqueName: \"kubernetes.io/projected/2ac97941-f22c-4599-9674-eeebb1347a85-kube-api-access-9q7nq\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.910499 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-db-sync-config-data\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.910517 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-config-data\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.919788 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-combined-ca-bundle\") pod \"glance-db-sync-qln9d\" 
(UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.922153 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-db-sync-config-data\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.927026 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-config-data\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:24 crc kubenswrapper[4764]: I0127 07:33:24.927645 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7nq\" (UniqueName: \"kubernetes.io/projected/2ac97941-f22c-4599-9674-eeebb1347a85-kube-api-access-9q7nq\") pod \"glance-db-sync-qln9d\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:25 crc kubenswrapper[4764]: I0127 07:33:25.101814 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:25 crc kubenswrapper[4764]: W0127 07:33:25.616257 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ac97941_f22c_4599_9674_eeebb1347a85.slice/crio-255ad8fd79255e1101def6f0e78e33b7c4a675d3653a64c50e83dfe08640a248 WatchSource:0}: Error finding container 255ad8fd79255e1101def6f0e78e33b7c4a675d3653a64c50e83dfe08640a248: Status 404 returned error can't find the container with id 255ad8fd79255e1101def6f0e78e33b7c4a675d3653a64c50e83dfe08640a248 Jan 27 07:33:25 crc kubenswrapper[4764]: I0127 07:33:25.618692 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qln9d"] Jan 27 07:33:26 crc kubenswrapper[4764]: I0127 07:33:26.016703 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qln9d" event={"ID":"2ac97941-f22c-4599-9674-eeebb1347a85","Type":"ContainerStarted","Data":"255ad8fd79255e1101def6f0e78e33b7c4a675d3653a64c50e83dfe08640a248"} Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.025610 4764 generic.go:334] "Generic (PLEG): container finished" podID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerID="7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8" exitCode=0 Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.025690 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"09b6de15-11fa-47bd-8648-53a8ad02deda","Type":"ContainerDied","Data":"7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8"} Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.030185 4764 generic.go:334] "Generic (PLEG): container finished" podID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerID="0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a" exitCode=0 Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.030223 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dff27bbf-49bf-4af7-aedb-e59e84269af3","Type":"ContainerDied","Data":"0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a"} Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.529131 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dh7v6"] Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.534589 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.539772 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.543101 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dh7v6"] Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.554163 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9st\" (UniqueName: \"kubernetes.io/projected/9fe7d723-e508-4c6a-ab0f-da07d92ed627-kube-api-access-lm9st\") pod \"root-account-create-update-dh7v6\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.554226 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe7d723-e508-4c6a-ab0f-da07d92ed627-operator-scripts\") pod \"root-account-create-update-dh7v6\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.655851 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9st\" (UniqueName: 
\"kubernetes.io/projected/9fe7d723-e508-4c6a-ab0f-da07d92ed627-kube-api-access-lm9st\") pod \"root-account-create-update-dh7v6\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.656273 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe7d723-e508-4c6a-ab0f-da07d92ed627-operator-scripts\") pod \"root-account-create-update-dh7v6\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.656859 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe7d723-e508-4c6a-ab0f-da07d92ed627-operator-scripts\") pod \"root-account-create-update-dh7v6\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.683242 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9st\" (UniqueName: \"kubernetes.io/projected/9fe7d723-e508-4c6a-ab0f-da07d92ed627-kube-api-access-lm9st\") pod \"root-account-create-update-dh7v6\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:27 crc kubenswrapper[4764]: I0127 07:33:27.850364 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:28 crc kubenswrapper[4764]: I0127 07:33:28.053186 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"09b6de15-11fa-47bd-8648-53a8ad02deda","Type":"ContainerStarted","Data":"19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830"} Jan 27 07:33:28 crc kubenswrapper[4764]: I0127 07:33:28.053508 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 07:33:28 crc kubenswrapper[4764]: I0127 07:33:28.058885 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dff27bbf-49bf-4af7-aedb-e59e84269af3","Type":"ContainerStarted","Data":"52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189"} Jan 27 07:33:28 crc kubenswrapper[4764]: I0127 07:33:28.091857 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.116074567 podStartE2EDuration="1m4.091834361s" podCreationTimestamp="2026-01-27 07:32:24 +0000 UTC" firstStartedPulling="2026-01-27 07:32:40.7233089 +0000 UTC m=+973.318931446" lastFinishedPulling="2026-01-27 07:32:49.699068704 +0000 UTC m=+982.294691240" observedRunningTime="2026-01-27 07:33:28.081129487 +0000 UTC m=+1020.676752013" watchObservedRunningTime="2026-01-27 07:33:28.091834361 +0000 UTC m=+1020.687456887" Jan 27 07:33:28 crc kubenswrapper[4764]: I0127 07:33:28.113121 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.277409203 podStartE2EDuration="1m4.113099184s" podCreationTimestamp="2026-01-27 07:32:24 +0000 UTC" firstStartedPulling="2026-01-27 07:32:42.318153346 +0000 UTC m=+974.913775872" lastFinishedPulling="2026-01-27 07:32:50.153843327 +0000 UTC m=+982.749465853" observedRunningTime="2026-01-27 07:33:28.108299187 +0000 UTC 
m=+1020.703921713" watchObservedRunningTime="2026-01-27 07:33:28.113099184 +0000 UTC m=+1020.708721710" Jan 27 07:33:28 crc kubenswrapper[4764]: I0127 07:33:28.291765 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dh7v6"] Jan 27 07:33:28 crc kubenswrapper[4764]: W0127 07:33:28.294498 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fe7d723_e508_4c6a_ab0f_da07d92ed627.slice/crio-ec4fc45c65fe5b5e6dc6487aa585d05fb0aca4bd9136c0e2a64c4866994ca817 WatchSource:0}: Error finding container ec4fc45c65fe5b5e6dc6487aa585d05fb0aca4bd9136c0e2a64c4866994ca817: Status 404 returned error can't find the container with id ec4fc45c65fe5b5e6dc6487aa585d05fb0aca4bd9136c0e2a64c4866994ca817 Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.076297 4764 generic.go:334] "Generic (PLEG): container finished" podID="9fe7d723-e508-4c6a-ab0f-da07d92ed627" containerID="439ca86c334cafa6514b8cec21ddca6c03188ea71ef5316f713618c907142052" exitCode=0 Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.077401 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dh7v6" event={"ID":"9fe7d723-e508-4c6a-ab0f-da07d92ed627","Type":"ContainerDied","Data":"439ca86c334cafa6514b8cec21ddca6c03188ea71ef5316f713618c907142052"} Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.077424 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dh7v6" event={"ID":"9fe7d723-e508-4c6a-ab0f-da07d92ed627","Type":"ContainerStarted","Data":"ec4fc45c65fe5b5e6dc6487aa585d05fb0aca4bd9136c0e2a64c4866994ca817"} Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.482921 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rw2w4" podUID="cad0e5a9-459c-4f9b-865b-ddc533316170" containerName="ovn-controller" probeResult="failure" output=< Jan 27 07:33:29 crc 
kubenswrapper[4764]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 07:33:29 crc kubenswrapper[4764]: > Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.504073 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.513360 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-75gxq" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.737066 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rw2w4-config-wjjjz"] Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.738089 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.744721 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.759678 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rw2w4-config-wjjjz"] Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.900343 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-scripts\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.900405 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " 
pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.900431 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-additional-scripts\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.900470 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw52k\" (UniqueName: \"kubernetes.io/projected/5039b653-5b89-426a-9b2c-7414871514b2-kube-api-access-xw52k\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.900502 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-log-ovn\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:29 crc kubenswrapper[4764]: I0127 07:33:29.900559 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run-ovn\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.001998 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run-ovn\") pod 
\"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002098 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-scripts\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002150 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002180 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-additional-scripts\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002205 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw52k\" (UniqueName: \"kubernetes.io/projected/5039b653-5b89-426a-9b2c-7414871514b2-kube-api-access-xw52k\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002249 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-log-ovn\") pod 
\"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002706 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-log-ovn\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002789 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run-ovn\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.002942 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.003127 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-additional-scripts\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.004697 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-scripts\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: 
\"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.026958 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw52k\" (UniqueName: \"kubernetes.io/projected/5039b653-5b89-426a-9b2c-7414871514b2-kube-api-access-xw52k\") pod \"ovn-controller-rw2w4-config-wjjjz\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.075807 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.466638 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.611921 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe7d723-e508-4c6a-ab0f-da07d92ed627-operator-scripts\") pod \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.612043 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9st\" (UniqueName: \"kubernetes.io/projected/9fe7d723-e508-4c6a-ab0f-da07d92ed627-kube-api-access-lm9st\") pod \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\" (UID: \"9fe7d723-e508-4c6a-ab0f-da07d92ed627\") " Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.613060 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fe7d723-e508-4c6a-ab0f-da07d92ed627-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fe7d723-e508-4c6a-ab0f-da07d92ed627" (UID: "9fe7d723-e508-4c6a-ab0f-da07d92ed627"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.618720 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe7d723-e508-4c6a-ab0f-da07d92ed627-kube-api-access-lm9st" (OuterVolumeSpecName: "kube-api-access-lm9st") pod "9fe7d723-e508-4c6a-ab0f-da07d92ed627" (UID: "9fe7d723-e508-4c6a-ab0f-da07d92ed627"). InnerVolumeSpecName "kube-api-access-lm9st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.636419 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rw2w4-config-wjjjz"] Jan 27 07:33:30 crc kubenswrapper[4764]: W0127 07:33:30.652797 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5039b653_5b89_426a_9b2c_7414871514b2.slice/crio-bc71c251e4a58e2efd7944df7af661f461db83bca4bd6d42d03b8851194961aa WatchSource:0}: Error finding container bc71c251e4a58e2efd7944df7af661f461db83bca4bd6d42d03b8851194961aa: Status 404 returned error can't find the container with id bc71c251e4a58e2efd7944df7af661f461db83bca4bd6d42d03b8851194961aa Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.714290 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe7d723-e508-4c6a-ab0f-da07d92ed627-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:30 crc kubenswrapper[4764]: I0127 07:33:30.714316 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9st\" (UniqueName: \"kubernetes.io/projected/9fe7d723-e508-4c6a-ab0f-da07d92ed627-kube-api-access-lm9st\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:31 crc kubenswrapper[4764]: I0127 07:33:31.097796 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dh7v6" Jan 27 07:33:31 crc kubenswrapper[4764]: I0127 07:33:31.097833 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dh7v6" event={"ID":"9fe7d723-e508-4c6a-ab0f-da07d92ed627","Type":"ContainerDied","Data":"ec4fc45c65fe5b5e6dc6487aa585d05fb0aca4bd9136c0e2a64c4866994ca817"} Jan 27 07:33:31 crc kubenswrapper[4764]: I0127 07:33:31.098621 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec4fc45c65fe5b5e6dc6487aa585d05fb0aca4bd9136c0e2a64c4866994ca817" Jan 27 07:33:31 crc kubenswrapper[4764]: I0127 07:33:31.105244 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4-config-wjjjz" event={"ID":"5039b653-5b89-426a-9b2c-7414871514b2","Type":"ContainerStarted","Data":"899ef6d9606da1a8acae8a0882e2b66b47b4f7084bace38a2928a69f1cbb4c87"} Jan 27 07:33:31 crc kubenswrapper[4764]: I0127 07:33:31.105271 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4-config-wjjjz" event={"ID":"5039b653-5b89-426a-9b2c-7414871514b2","Type":"ContainerStarted","Data":"bc71c251e4a58e2efd7944df7af661f461db83bca4bd6d42d03b8851194961aa"} Jan 27 07:33:31 crc kubenswrapper[4764]: I0127 07:33:31.125848 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rw2w4-config-wjjjz" podStartSLOduration=2.12583028 podStartE2EDuration="2.12583028s" podCreationTimestamp="2026-01-27 07:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:31.124376961 +0000 UTC m=+1023.719999497" watchObservedRunningTime="2026-01-27 07:33:31.12583028 +0000 UTC m=+1023.721452806" Jan 27 07:33:32 crc kubenswrapper[4764]: I0127 07:33:32.114713 4764 generic.go:334] "Generic (PLEG): container finished" podID="5039b653-5b89-426a-9b2c-7414871514b2" 
containerID="899ef6d9606da1a8acae8a0882e2b66b47b4f7084bace38a2928a69f1cbb4c87" exitCode=0 Jan 27 07:33:32 crc kubenswrapper[4764]: I0127 07:33:32.114768 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4-config-wjjjz" event={"ID":"5039b653-5b89-426a-9b2c-7414871514b2","Type":"ContainerDied","Data":"899ef6d9606da1a8acae8a0882e2b66b47b4f7084bace38a2928a69f1cbb4c87"} Jan 27 07:33:34 crc kubenswrapper[4764]: I0127 07:33:34.481529 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rw2w4" Jan 27 07:33:34 crc kubenswrapper[4764]: I0127 07:33:34.585362 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:34 crc kubenswrapper[4764]: I0127 07:33:34.594359 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f481ed8-7f32-478b-88ce-6caaa3a42074-etc-swift\") pod \"swift-storage-0\" (UID: \"3f481ed8-7f32-478b-88ce-6caaa3a42074\") " pod="openstack/swift-storage-0" Jan 27 07:33:34 crc kubenswrapper[4764]: I0127 07:33:34.764316 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 07:33:36 crc kubenswrapper[4764]: I0127 07:33:36.377534 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.504475 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.635264 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run\") pod \"5039b653-5b89-426a-9b2c-7414871514b2\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.636390 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-additional-scripts\") pod \"5039b653-5b89-426a-9b2c-7414871514b2\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.636449 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run-ovn\") pod \"5039b653-5b89-426a-9b2c-7414871514b2\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.636493 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-log-ovn\") pod \"5039b653-5b89-426a-9b2c-7414871514b2\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.636551 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-scripts\") pod \"5039b653-5b89-426a-9b2c-7414871514b2\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.636580 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw52k\" (UniqueName: 
\"kubernetes.io/projected/5039b653-5b89-426a-9b2c-7414871514b2-kube-api-access-xw52k\") pod \"5039b653-5b89-426a-9b2c-7414871514b2\" (UID: \"5039b653-5b89-426a-9b2c-7414871514b2\") " Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.635559 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run" (OuterVolumeSpecName: "var-run") pod "5039b653-5b89-426a-9b2c-7414871514b2" (UID: "5039b653-5b89-426a-9b2c-7414871514b2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.637774 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5039b653-5b89-426a-9b2c-7414871514b2" (UID: "5039b653-5b89-426a-9b2c-7414871514b2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.637893 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5039b653-5b89-426a-9b2c-7414871514b2" (UID: "5039b653-5b89-426a-9b2c-7414871514b2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.638264 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5039b653-5b89-426a-9b2c-7414871514b2" (UID: "5039b653-5b89-426a-9b2c-7414871514b2"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.638821 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-scripts" (OuterVolumeSpecName: "scripts") pod "5039b653-5b89-426a-9b2c-7414871514b2" (UID: "5039b653-5b89-426a-9b2c-7414871514b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.640279 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5039b653-5b89-426a-9b2c-7414871514b2-kube-api-access-xw52k" (OuterVolumeSpecName: "kube-api-access-xw52k") pod "5039b653-5b89-426a-9b2c-7414871514b2" (UID: "5039b653-5b89-426a-9b2c-7414871514b2"). InnerVolumeSpecName "kube-api-access-xw52k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.738050 4764 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.738083 4764 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.738094 4764 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.738103 4764 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5039b653-5b89-426a-9b2c-7414871514b2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 
27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.738118 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5039b653-5b89-426a-9b2c-7414871514b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:37 crc kubenswrapper[4764]: I0127 07:33:37.738128 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw52k\" (UniqueName: \"kubernetes.io/projected/5039b653-5b89-426a-9b2c-7414871514b2-kube-api-access-xw52k\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.038365 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 07:33:38 crc kubenswrapper[4764]: W0127 07:33:38.053288 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f481ed8_7f32_478b_88ce_6caaa3a42074.slice/crio-a8844bfe24ad179679bc663b859a300626fdbbefe2a2227a8ac09e886a662562 WatchSource:0}: Error finding container a8844bfe24ad179679bc663b859a300626fdbbefe2a2227a8ac09e886a662562: Status 404 returned error can't find the container with id a8844bfe24ad179679bc663b859a300626fdbbefe2a2227a8ac09e886a662562 Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.158341 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"a8844bfe24ad179679bc663b859a300626fdbbefe2a2227a8ac09e886a662562"} Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.159932 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4-config-wjjjz" event={"ID":"5039b653-5b89-426a-9b2c-7414871514b2","Type":"ContainerDied","Data":"bc71c251e4a58e2efd7944df7af661f461db83bca4bd6d42d03b8851194961aa"} Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.159958 4764 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bc71c251e4a58e2efd7944df7af661f461db83bca4bd6d42d03b8851194961aa" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.160717 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-wjjjz" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.161338 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qln9d" event={"ID":"2ac97941-f22c-4599-9674-eeebb1347a85","Type":"ContainerStarted","Data":"66024f67d7623815f69a30dbdc79809a5cd0a723f80f6594d3405443669adc91"} Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.181582 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qln9d" podStartSLOduration=2.277948025 podStartE2EDuration="14.181558205s" podCreationTimestamp="2026-01-27 07:33:24 +0000 UTC" firstStartedPulling="2026-01-27 07:33:25.618314326 +0000 UTC m=+1018.213936852" lastFinishedPulling="2026-01-27 07:33:37.521924506 +0000 UTC m=+1030.117547032" observedRunningTime="2026-01-27 07:33:38.173669866 +0000 UTC m=+1030.769292402" watchObservedRunningTime="2026-01-27 07:33:38.181558205 +0000 UTC m=+1030.777180731" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.621347 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rw2w4-config-wjjjz"] Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.630184 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rw2w4-config-wjjjz"] Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.722355 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rw2w4-config-vk6bb"] Jan 27 07:33:38 crc kubenswrapper[4764]: E0127 07:33:38.722786 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe7d723-e508-4c6a-ab0f-da07d92ed627" containerName="mariadb-account-create-update" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.722808 4764 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe7d723-e508-4c6a-ab0f-da07d92ed627" containerName="mariadb-account-create-update" Jan 27 07:33:38 crc kubenswrapper[4764]: E0127 07:33:38.722835 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5039b653-5b89-426a-9b2c-7414871514b2" containerName="ovn-config" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.722843 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5039b653-5b89-426a-9b2c-7414871514b2" containerName="ovn-config" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.723060 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe7d723-e508-4c6a-ab0f-da07d92ed627" containerName="mariadb-account-create-update" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.723094 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5039b653-5b89-426a-9b2c-7414871514b2" containerName="ovn-config" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.723727 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.726677 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.737169 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rw2w4-config-vk6bb"] Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.858354 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.858432 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-scripts\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.858483 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-log-ovn\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.858627 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-additional-scripts\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: 
\"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.858740 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cl5h\" (UniqueName: \"kubernetes.io/projected/2dfdd5f5-0044-4c15-8a05-37258f074d30-kube-api-access-8cl5h\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.858817 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run-ovn\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.965523 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.965593 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-scripts\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.965619 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-log-ovn\") pod 
\"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.966062 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-log-ovn\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.966654 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-additional-scripts\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.967703 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-scripts\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.967771 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.968146 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-additional-scripts\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: 
\"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.968246 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cl5h\" (UniqueName: \"kubernetes.io/projected/2dfdd5f5-0044-4c15-8a05-37258f074d30-kube-api-access-8cl5h\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.968305 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run-ovn\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.968407 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run-ovn\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:38 crc kubenswrapper[4764]: I0127 07:33:38.990451 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cl5h\" (UniqueName: \"kubernetes.io/projected/2dfdd5f5-0044-4c15-8a05-37258f074d30-kube-api-access-8cl5h\") pod \"ovn-controller-rw2w4-config-vk6bb\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:39 crc kubenswrapper[4764]: I0127 07:33:39.044312 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:39 crc kubenswrapper[4764]: I0127 07:33:39.741776 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rw2w4-config-vk6bb"] Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.176146 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4-config-vk6bb" event={"ID":"2dfdd5f5-0044-4c15-8a05-37258f074d30","Type":"ContainerStarted","Data":"2f3bc34933aeb975c4fe9701d8626d55f49ca5136ccc5756d0fcfbe09632b069"} Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.176618 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4-config-vk6bb" event={"ID":"2dfdd5f5-0044-4c15-8a05-37258f074d30","Type":"ContainerStarted","Data":"4f2f947747fa05b798f69ccfb2f7d3eaf9dbafdb382c5357158c27c0793d69ae"} Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.178924 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"c61289e0b18c1367b65d2afc0ad3790eee947c8da6c361faf196a626cc2e2996"} Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.178967 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"edb697fbec41fb7815c24c23e80b405293d89dbd4978cb98591144b23a51fd0a"} Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.178980 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"5f0bff3626e576481e160ca7f5b1005d15ed09dc92acd91593ac06bd052185a1"} Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.178990 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"774055e48f5839e0e266b40bc8ad20cc1ee9942286b354c51a0d9773f2261a18"} Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.197278 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rw2w4-config-vk6bb" podStartSLOduration=2.197257017 podStartE2EDuration="2.197257017s" podCreationTimestamp="2026-01-27 07:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:40.195787678 +0000 UTC m=+1032.791410204" watchObservedRunningTime="2026-01-27 07:33:40.197257017 +0000 UTC m=+1032.792879543" Jan 27 07:33:40 crc kubenswrapper[4764]: I0127 07:33:40.455617 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5039b653-5b89-426a-9b2c-7414871514b2" path="/var/lib/kubelet/pods/5039b653-5b89-426a-9b2c-7414871514b2/volumes" Jan 27 07:33:41 crc kubenswrapper[4764]: I0127 07:33:41.193909 4764 generic.go:334] "Generic (PLEG): container finished" podID="2dfdd5f5-0044-4c15-8a05-37258f074d30" containerID="2f3bc34933aeb975c4fe9701d8626d55f49ca5136ccc5756d0fcfbe09632b069" exitCode=0 Jan 27 07:33:41 crc kubenswrapper[4764]: I0127 07:33:41.193996 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rw2w4-config-vk6bb" event={"ID":"2dfdd5f5-0044-4c15-8a05-37258f074d30","Type":"ContainerDied","Data":"2f3bc34933aeb975c4fe9701d8626d55f49ca5136ccc5756d0fcfbe09632b069"} Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.510501 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.635850 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run-ovn\") pod \"2dfdd5f5-0044-4c15-8a05-37258f074d30\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636234 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run\") pod \"2dfdd5f5-0044-4c15-8a05-37258f074d30\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.635984 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2dfdd5f5-0044-4c15-8a05-37258f074d30" (UID: "2dfdd5f5-0044-4c15-8a05-37258f074d30"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636280 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-scripts\") pod \"2dfdd5f5-0044-4c15-8a05-37258f074d30\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636298 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run" (OuterVolumeSpecName: "var-run") pod "2dfdd5f5-0044-4c15-8a05-37258f074d30" (UID: "2dfdd5f5-0044-4c15-8a05-37258f074d30"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636378 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-additional-scripts\") pod \"2dfdd5f5-0044-4c15-8a05-37258f074d30\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636425 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cl5h\" (UniqueName: \"kubernetes.io/projected/2dfdd5f5-0044-4c15-8a05-37258f074d30-kube-api-access-8cl5h\") pod \"2dfdd5f5-0044-4c15-8a05-37258f074d30\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636481 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-log-ovn\") pod \"2dfdd5f5-0044-4c15-8a05-37258f074d30\" (UID: \"2dfdd5f5-0044-4c15-8a05-37258f074d30\") " Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636722 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2dfdd5f5-0044-4c15-8a05-37258f074d30" (UID: "2dfdd5f5-0044-4c15-8a05-37258f074d30"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636901 4764 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636914 4764 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.636923 4764 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dfdd5f5-0044-4c15-8a05-37258f074d30-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.637387 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2dfdd5f5-0044-4c15-8a05-37258f074d30" (UID: "2dfdd5f5-0044-4c15-8a05-37258f074d30"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.637627 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-scripts" (OuterVolumeSpecName: "scripts") pod "2dfdd5f5-0044-4c15-8a05-37258f074d30" (UID: "2dfdd5f5-0044-4c15-8a05-37258f074d30"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.641560 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfdd5f5-0044-4c15-8a05-37258f074d30-kube-api-access-8cl5h" (OuterVolumeSpecName: "kube-api-access-8cl5h") pod "2dfdd5f5-0044-4c15-8a05-37258f074d30" (UID: "2dfdd5f5-0044-4c15-8a05-37258f074d30"). InnerVolumeSpecName "kube-api-access-8cl5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.739126 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.739167 4764 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dfdd5f5-0044-4c15-8a05-37258f074d30-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.739187 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cl5h\" (UniqueName: \"kubernetes.io/projected/2dfdd5f5-0044-4c15-8a05-37258f074d30-kube-api-access-8cl5h\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.815813 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rw2w4-config-vk6bb"] Jan 27 07:33:42 crc kubenswrapper[4764]: I0127 07:33:42.825942 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rw2w4-config-vk6bb"] Jan 27 07:33:43 crc kubenswrapper[4764]: I0127 07:33:43.221019 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f2f947747fa05b798f69ccfb2f7d3eaf9dbafdb382c5357158c27c0793d69ae" Jan 27 07:33:43 crc kubenswrapper[4764]: I0127 07:33:43.221038 4764 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-rw2w4-config-vk6bb" Jan 27 07:33:43 crc kubenswrapper[4764]: I0127 07:33:43.224932 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"b7a5205cd188da2ab1709b4e17f8e5a1b485a881ae614dcdb7d3822175e78085"} Jan 27 07:33:43 crc kubenswrapper[4764]: I0127 07:33:43.224977 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"ed7cf5e42d6a99d3e1df0f9869510869d11f66caea58a071b90627119db75c4e"} Jan 27 07:33:43 crc kubenswrapper[4764]: I0127 07:33:43.224992 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"159a98f4b13d8baf4a3a0b8e2427f4828d8647a61e72a0a50022c79c0909de14"} Jan 27 07:33:43 crc kubenswrapper[4764]: I0127 07:33:43.225002 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"d9f81b2bebcd10076f948cd3fd142613d6dddb49296f1521f91e1a61f84cccb2"} Jan 27 07:33:44 crc kubenswrapper[4764]: I0127 07:33:44.237681 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"a8a5296dcf3ca8f610a6af7b62796690cc42f529ea3ceec8f6cd36ef4afe9534"} Jan 27 07:33:44 crc kubenswrapper[4764]: I0127 07:33:44.238039 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"f5b1d2231f9643eff7184fbdfcd955502f2d738eb5569d50805310edc26d4c8b"} Jan 27 07:33:44 crc kubenswrapper[4764]: I0127 07:33:44.448843 4764 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="2dfdd5f5-0044-4c15-8a05-37258f074d30" path="/var/lib/kubelet/pods/2dfdd5f5-0044-4c15-8a05-37258f074d30/volumes" Jan 27 07:33:45 crc kubenswrapper[4764]: I0127 07:33:45.252738 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"dac5056a45eeed90e7d81689564a174fbea1dc31542049de1d4594d580a981c2"} Jan 27 07:33:45 crc kubenswrapper[4764]: I0127 07:33:45.253145 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"11a8c49b89858f754db90cc0bccb5ada9d5522ad3061e0fb315d76b5dd4c9af5"} Jan 27 07:33:45 crc kubenswrapper[4764]: I0127 07:33:45.254314 4764 generic.go:334] "Generic (PLEG): container finished" podID="2ac97941-f22c-4599-9674-eeebb1347a85" containerID="66024f67d7623815f69a30dbdc79809a5cd0a723f80f6594d3405443669adc91" exitCode=0 Jan 27 07:33:45 crc kubenswrapper[4764]: I0127 07:33:45.254339 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qln9d" event={"ID":"2ac97941-f22c-4599-9674-eeebb1347a85","Type":"ContainerDied","Data":"66024f67d7623815f69a30dbdc79809a5cd0a723f80f6594d3405443669adc91"} Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.312726 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.383053 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.737700 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.768974 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-f57wb"] Jan 27 07:33:46 crc kubenswrapper[4764]: E0127 07:33:46.769372 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfdd5f5-0044-4c15-8a05-37258f074d30" containerName="ovn-config" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.769395 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfdd5f5-0044-4c15-8a05-37258f074d30" containerName="ovn-config" Jan 27 07:33:46 crc kubenswrapper[4764]: E0127 07:33:46.769455 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac97941-f22c-4599-9674-eeebb1347a85" containerName="glance-db-sync" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.769464 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac97941-f22c-4599-9674-eeebb1347a85" containerName="glance-db-sync" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.769691 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac97941-f22c-4599-9674-eeebb1347a85" containerName="glance-db-sync" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.769714 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfdd5f5-0044-4c15-8a05-37258f074d30" containerName="ovn-config" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.770252 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.798505 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-232f-account-create-update-lzptk"] Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.799581 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.802676 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.805731 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-232f-account-create-update-lzptk"] Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.809906 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-config-data\") pod \"2ac97941-f22c-4599-9674-eeebb1347a85\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.810029 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-db-sync-config-data\") pod \"2ac97941-f22c-4599-9674-eeebb1347a85\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.810055 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q7nq\" (UniqueName: \"kubernetes.io/projected/2ac97941-f22c-4599-9674-eeebb1347a85-kube-api-access-9q7nq\") pod \"2ac97941-f22c-4599-9674-eeebb1347a85\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.810093 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-combined-ca-bundle\") pod \"2ac97941-f22c-4599-9674-eeebb1347a85\" (UID: \"2ac97941-f22c-4599-9674-eeebb1347a85\") " Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.817173 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-create-f57wb"] Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.817411 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ac97941-f22c-4599-9674-eeebb1347a85" (UID: "2ac97941-f22c-4599-9674-eeebb1347a85"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.825711 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac97941-f22c-4599-9674-eeebb1347a85-kube-api-access-9q7nq" (OuterVolumeSpecName: "kube-api-access-9q7nq") pod "2ac97941-f22c-4599-9674-eeebb1347a85" (UID: "2ac97941-f22c-4599-9674-eeebb1347a85"). InnerVolumeSpecName "kube-api-access-9q7nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.880521 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cfvvc"] Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.884368 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.890191 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ac97941-f22c-4599-9674-eeebb1347a85" (UID: "2ac97941-f22c-4599-9674-eeebb1347a85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.896064 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-config-data" (OuterVolumeSpecName: "config-data") pod "2ac97941-f22c-4599-9674-eeebb1347a85" (UID: "2ac97941-f22c-4599-9674-eeebb1347a85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.911176 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cfvvc"] Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912277 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fde588f-3a41-400e-9dd9-42ffb66989db-operator-scripts\") pod \"barbican-db-create-f57wb\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912338 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjg5t\" (UniqueName: \"kubernetes.io/projected/13f98420-ea6f-40cb-b274-0bf3b3282252-kube-api-access-tjg5t\") pod \"cinder-232f-account-create-update-lzptk\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912370 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f98420-ea6f-40cb-b274-0bf3b3282252-operator-scripts\") pod \"cinder-232f-account-create-update-lzptk\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912394 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvkm\" (UniqueName: \"kubernetes.io/projected/8fde588f-3a41-400e-9dd9-42ffb66989db-kube-api-access-9qvkm\") pod \"barbican-db-create-f57wb\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912476 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912490 4764 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912514 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q7nq\" (UniqueName: \"kubernetes.io/projected/2ac97941-f22c-4599-9674-eeebb1347a85-kube-api-access-9q7nq\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.912523 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac97941-f22c-4599-9674-eeebb1347a85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.922136 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1f49-account-create-update-hdh2b"] Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.923121 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.926216 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 07:33:46 crc kubenswrapper[4764]: I0127 07:33:46.933311 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1f49-account-create-update-hdh2b"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.013851 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fde588f-3a41-400e-9dd9-42ffb66989db-operator-scripts\") pod \"barbican-db-create-f57wb\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014131 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c458dd5-a975-4d75-83f4-c58184f63ab2-operator-scripts\") pod \"barbican-1f49-account-create-update-hdh2b\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014153 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a2109a-990b-4435-a62e-4b4ca7d52c1e-operator-scripts\") pod \"cinder-db-create-cfvvc\" (UID: \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014169 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcx8j\" (UniqueName: \"kubernetes.io/projected/23a2109a-990b-4435-a62e-4b4ca7d52c1e-kube-api-access-gcx8j\") pod \"cinder-db-create-cfvvc\" (UID: 
\"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014194 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjg5t\" (UniqueName: \"kubernetes.io/projected/13f98420-ea6f-40cb-b274-0bf3b3282252-kube-api-access-tjg5t\") pod \"cinder-232f-account-create-update-lzptk\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014221 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f98420-ea6f-40cb-b274-0bf3b3282252-operator-scripts\") pod \"cinder-232f-account-create-update-lzptk\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014246 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvkm\" (UniqueName: \"kubernetes.io/projected/8fde588f-3a41-400e-9dd9-42ffb66989db-kube-api-access-9qvkm\") pod \"barbican-db-create-f57wb\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014285 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2p56\" (UniqueName: \"kubernetes.io/projected/8c458dd5-a975-4d75-83f4-c58184f63ab2-kube-api-access-s2p56\") pod \"barbican-1f49-account-create-update-hdh2b\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.014782 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8fde588f-3a41-400e-9dd9-42ffb66989db-operator-scripts\") pod \"barbican-db-create-f57wb\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.015026 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f98420-ea6f-40cb-b274-0bf3b3282252-operator-scripts\") pod \"cinder-232f-account-create-update-lzptk\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.030772 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjg5t\" (UniqueName: \"kubernetes.io/projected/13f98420-ea6f-40cb-b274-0bf3b3282252-kube-api-access-tjg5t\") pod \"cinder-232f-account-create-update-lzptk\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.031368 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvkm\" (UniqueName: \"kubernetes.io/projected/8fde588f-3a41-400e-9dd9-42ffb66989db-kube-api-access-9qvkm\") pod \"barbican-db-create-f57wb\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.071395 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cwtw5"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.072495 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.081379 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cwtw5"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.092894 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.115807 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c458dd5-a975-4d75-83f4-c58184f63ab2-operator-scripts\") pod \"barbican-1f49-account-create-update-hdh2b\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.115862 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a2109a-990b-4435-a62e-4b4ca7d52c1e-operator-scripts\") pod \"cinder-db-create-cfvvc\" (UID: \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.115887 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcx8j\" (UniqueName: \"kubernetes.io/projected/23a2109a-990b-4435-a62e-4b4ca7d52c1e-kube-api-access-gcx8j\") pod \"cinder-db-create-cfvvc\" (UID: \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.115994 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2p56\" (UniqueName: \"kubernetes.io/projected/8c458dd5-a975-4d75-83f4-c58184f63ab2-kube-api-access-s2p56\") pod \"barbican-1f49-account-create-update-hdh2b\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " 
pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.116680 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c458dd5-a975-4d75-83f4-c58184f63ab2-operator-scripts\") pod \"barbican-1f49-account-create-update-hdh2b\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.117332 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a2109a-990b-4435-a62e-4b4ca7d52c1e-operator-scripts\") pod \"cinder-db-create-cfvvc\" (UID: \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.120414 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.130260 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zgzzx"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.131958 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.134684 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.135219 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2p56\" (UniqueName: \"kubernetes.io/projected/8c458dd5-a975-4d75-83f4-c58184f63ab2-kube-api-access-s2p56\") pod \"barbican-1f49-account-create-update-hdh2b\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.135365 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcx8j\" (UniqueName: \"kubernetes.io/projected/23a2109a-990b-4435-a62e-4b4ca7d52c1e-kube-api-access-gcx8j\") pod \"cinder-db-create-cfvvc\" (UID: \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.139597 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.139983 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88tdq" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.140185 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.145950 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zgzzx"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.190678 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9636-account-create-update-lj4pb"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.191838 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.193912 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.207587 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9636-account-create-update-lj4pb"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.222536 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-combined-ca-bundle\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.222599 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxpl\" (UniqueName: \"kubernetes.io/projected/f3e76902-0634-44b5-bb6b-0cdf63efaf87-kube-api-access-lmxpl\") pod \"neutron-db-create-cwtw5\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.222703 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-config-data\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.222731 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e76902-0634-44b5-bb6b-0cdf63efaf87-operator-scripts\") pod \"neutron-db-create-cwtw5\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " 
pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.222762 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsm9c\" (UniqueName: \"kubernetes.io/projected/1070498a-e8fe-43a6-b6d3-4a2862f24fee-kube-api-access-vsm9c\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.242769 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.251880 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.271523 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qln9d" event={"ID":"2ac97941-f22c-4599-9674-eeebb1347a85","Type":"ContainerDied","Data":"255ad8fd79255e1101def6f0e78e33b7c4a675d3653a64c50e83dfe08640a248"} Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.271559 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255ad8fd79255e1101def6f0e78e33b7c4a675d3653a64c50e83dfe08640a248" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.271632 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qln9d" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.324116 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsm9c\" (UniqueName: \"kubernetes.io/projected/1070498a-e8fe-43a6-b6d3-4a2862f24fee-kube-api-access-vsm9c\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.324260 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-combined-ca-bundle\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.324309 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxpl\" (UniqueName: \"kubernetes.io/projected/f3e76902-0634-44b5-bb6b-0cdf63efaf87-kube-api-access-lmxpl\") pod \"neutron-db-create-cwtw5\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.324348 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjtf\" (UniqueName: \"kubernetes.io/projected/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-kube-api-access-gsjtf\") pod \"neutron-9636-account-create-update-lj4pb\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.324415 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-operator-scripts\") pod 
\"neutron-9636-account-create-update-lj4pb\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.324500 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-config-data\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.324534 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e76902-0634-44b5-bb6b-0cdf63efaf87-operator-scripts\") pod \"neutron-db-create-cwtw5\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.325323 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e76902-0634-44b5-bb6b-0cdf63efaf87-operator-scripts\") pod \"neutron-db-create-cwtw5\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.327960 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-combined-ca-bundle\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.330003 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-config-data\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " 
pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.341047 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxpl\" (UniqueName: \"kubernetes.io/projected/f3e76902-0634-44b5-bb6b-0cdf63efaf87-kube-api-access-lmxpl\") pod \"neutron-db-create-cwtw5\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.341360 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsm9c\" (UniqueName: \"kubernetes.io/projected/1070498a-e8fe-43a6-b6d3-4a2862f24fee-kube-api-access-vsm9c\") pod \"keystone-db-sync-zgzzx\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.387773 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.425983 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjtf\" (UniqueName: \"kubernetes.io/projected/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-kube-api-access-gsjtf\") pod \"neutron-9636-account-create-update-lj4pb\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.426059 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-operator-scripts\") pod \"neutron-9636-account-create-update-lj4pb\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.426759 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-operator-scripts\") pod \"neutron-9636-account-create-update-lj4pb\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.447394 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.449178 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjtf\" (UniqueName: \"kubernetes.io/projected/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-kube-api-access-gsjtf\") pod \"neutron-9636-account-create-update-lj4pb\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.522426 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.616913 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-wbwxx"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.618586 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.634956 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-wbwxx"] Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.736801 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnms\" (UniqueName: \"kubernetes.io/projected/f2172902-7472-4083-9070-86624562cdf5-kube-api-access-ntnms\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.736851 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-config\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.736899 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.737008 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.737030 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.838771 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.838827 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.838899 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnms\" (UniqueName: \"kubernetes.io/projected/f2172902-7472-4083-9070-86624562cdf5-kube-api-access-ntnms\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.838927 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-config\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.838989 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.839689 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.839880 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.840100 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-config\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.840255 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:47 crc kubenswrapper[4764]: I0127 07:33:47.880797 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnms\" (UniqueName: \"kubernetes.io/projected/f2172902-7472-4083-9070-86624562cdf5-kube-api-access-ntnms\") pod \"dnsmasq-dns-79778dbd8c-wbwxx\" 
(UID: \"f2172902-7472-4083-9070-86624562cdf5\") " pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:47.997932 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.555798 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-f57wb"] Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.725686 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cfvvc"] Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.741023 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-232f-account-create-update-lzptk"] Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.764404 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9636-account-create-update-lj4pb"] Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.779623 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1f49-account-create-update-hdh2b"] Jan 27 07:33:48 crc kubenswrapper[4764]: W0127 07:33:48.780651 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3e76902_0634_44b5_bb6b_0cdf63efaf87.slice/crio-ef31f69015f1a3b00d3a7c631eba0dda06081b0f12b0650202330c1dcd151ed3 WatchSource:0}: Error finding container ef31f69015f1a3b00d3a7c631eba0dda06081b0f12b0650202330c1dcd151ed3: Status 404 returned error can't find the container with id ef31f69015f1a3b00d3a7c631eba0dda06081b0f12b0650202330c1dcd151ed3 Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.789897 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cwtw5"] Jan 27 07:33:48 crc kubenswrapper[4764]: W0127 07:33:48.791947 4764 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c458dd5_a975_4d75_83f4_c58184f63ab2.slice/crio-edb9fc44af5120eb7a6dde2d2c8a07b0f4f3ee0bec9b7c9463e4a1ae865578e5 WatchSource:0}: Error finding container edb9fc44af5120eb7a6dde2d2c8a07b0f4f3ee0bec9b7c9463e4a1ae865578e5: Status 404 returned error can't find the container with id edb9fc44af5120eb7a6dde2d2c8a07b0f4f3ee0bec9b7c9463e4a1ae865578e5 Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.799686 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zgzzx"] Jan 27 07:33:48 crc kubenswrapper[4764]: I0127 07:33:48.960291 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-wbwxx"] Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.290120 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zgzzx" event={"ID":"1070498a-e8fe-43a6-b6d3-4a2862f24fee","Type":"ContainerStarted","Data":"eefccdadfcb514443825e604ff6cda32d997151b1bf51b48401db36ab305c5f9"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.292842 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" event={"ID":"f2172902-7472-4083-9070-86624562cdf5","Type":"ContainerStarted","Data":"9a2cb4d223ee98b2c8079996d42da44b8c7ae505a4e928bcc128518350ec9c96"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.292882 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" event={"ID":"f2172902-7472-4083-9070-86624562cdf5","Type":"ContainerStarted","Data":"2113a1bc896d2cea23002e5f001437378ff58021cdf419109262ef4bb3b8b610"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.294387 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cfvvc" event={"ID":"23a2109a-990b-4435-a62e-4b4ca7d52c1e","Type":"ContainerStarted","Data":"1bffcb01def431b5e658cd2aaed8ddad47711933995b6aa74f7af7ddb1d57bd7"} Jan 27 07:33:49 
crc kubenswrapper[4764]: I0127 07:33:49.294416 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cfvvc" event={"ID":"23a2109a-990b-4435-a62e-4b4ca7d52c1e","Type":"ContainerStarted","Data":"44cebcc1cb6c2a734c55aa16457ae3605f783ec607fc95d57fff3e631be07a26"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.301702 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cwtw5" event={"ID":"f3e76902-0634-44b5-bb6b-0cdf63efaf87","Type":"ContainerStarted","Data":"3a01c9c23b9dea86eb21b397da6cf942c2bb5e2b4853f1e14a76f623554841c1"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.301747 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cwtw5" event={"ID":"f3e76902-0634-44b5-bb6b-0cdf63efaf87","Type":"ContainerStarted","Data":"ef31f69015f1a3b00d3a7c631eba0dda06081b0f12b0650202330c1dcd151ed3"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.310012 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9636-account-create-update-lj4pb" event={"ID":"4a7123da-ea98-40d7-bed5-0cdbafc74ca1","Type":"ContainerStarted","Data":"e4acb1c925b44d9763be14d6bc219c5fd882d01ab50b508b7a3944c82cddaacd"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.310068 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9636-account-create-update-lj4pb" event={"ID":"4a7123da-ea98-40d7-bed5-0cdbafc74ca1","Type":"ContainerStarted","Data":"00f168a1535cb57403dfe1d49a162299a32b66ea48f0128bbabe3af26c2a9b88"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.313230 4764 generic.go:334] "Generic (PLEG): container finished" podID="8fde588f-3a41-400e-9dd9-42ffb66989db" containerID="549c8af9d6b505397ae223e136c1c4660908ad4f53bc194b052646ff4282784e" exitCode=0 Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.313287 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f57wb" 
event={"ID":"8fde588f-3a41-400e-9dd9-42ffb66989db","Type":"ContainerDied","Data":"549c8af9d6b505397ae223e136c1c4660908ad4f53bc194b052646ff4282784e"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.313311 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f57wb" event={"ID":"8fde588f-3a41-400e-9dd9-42ffb66989db","Type":"ContainerStarted","Data":"44ac5aabc07909229e7818ccdd9635c87fbfc88ecd7fe1ac547f5dbf2d171610"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.315173 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1f49-account-create-update-hdh2b" event={"ID":"8c458dd5-a975-4d75-83f4-c58184f63ab2","Type":"ContainerStarted","Data":"f3c2b34a37faf07ff247bb50e686df1a1c83b87236b0c227ca075403ef9ef1b8"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.315203 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1f49-account-create-update-hdh2b" event={"ID":"8c458dd5-a975-4d75-83f4-c58184f63ab2","Type":"ContainerStarted","Data":"edb9fc44af5120eb7a6dde2d2c8a07b0f4f3ee0bec9b7c9463e4a1ae865578e5"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.324283 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"272f615dd51e9cdd95c0ff3ddffc50c303ded3008015b7b393fa1749958f3e1c"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.324322 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"3fca33c9aebbc41d077119c52cc1c317329b23ef5ed8a00d333621a55038aceb"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.329022 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-232f-account-create-update-lzptk" 
event={"ID":"13f98420-ea6f-40cb-b274-0bf3b3282252","Type":"ContainerStarted","Data":"1e947509ede9b58e3c4e2c08675c499070d87eb94f1529dbb8060d270d2837d2"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.329060 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-232f-account-create-update-lzptk" event={"ID":"13f98420-ea6f-40cb-b274-0bf3b3282252","Type":"ContainerStarted","Data":"b5f622dd604630d47d84321a7dd47a962fb28f5dfb8e5a4a06f3d1b8d9c29c83"} Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.344406 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-cwtw5" podStartSLOduration=2.344387315 podStartE2EDuration="2.344387315s" podCreationTimestamp="2026-01-27 07:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:49.341406007 +0000 UTC m=+1041.937028533" watchObservedRunningTime="2026-01-27 07:33:49.344387315 +0000 UTC m=+1041.940009841" Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.359270 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9636-account-create-update-lj4pb" podStartSLOduration=2.359252325 podStartE2EDuration="2.359252325s" podCreationTimestamp="2026-01-27 07:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:49.35372763 +0000 UTC m=+1041.949350156" watchObservedRunningTime="2026-01-27 07:33:49.359252325 +0000 UTC m=+1041.954874851" Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.401588 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-cfvvc" podStartSLOduration=3.40141522 podStartE2EDuration="3.40141522s" podCreationTimestamp="2026-01-27 07:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:49.372932093 +0000 UTC m=+1041.968554619" watchObservedRunningTime="2026-01-27 07:33:49.40141522 +0000 UTC m=+1041.997037736" Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.423126 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-1f49-account-create-update-hdh2b" podStartSLOduration=3.423106459 podStartE2EDuration="3.423106459s" podCreationTimestamp="2026-01-27 07:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:49.422271737 +0000 UTC m=+1042.017894273" watchObservedRunningTime="2026-01-27 07:33:49.423106459 +0000 UTC m=+1042.018728985" Jan 27 07:33:49 crc kubenswrapper[4764]: I0127 07:33:49.440297 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-232f-account-create-update-lzptk" podStartSLOduration=3.440272909 podStartE2EDuration="3.440272909s" podCreationTimestamp="2026-01-27 07:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:49.437840735 +0000 UTC m=+1042.033463261" watchObservedRunningTime="2026-01-27 07:33:49.440272909 +0000 UTC m=+1042.035895435" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.338878 4764 generic.go:334] "Generic (PLEG): container finished" podID="f3e76902-0634-44b5-bb6b-0cdf63efaf87" containerID="3a01c9c23b9dea86eb21b397da6cf942c2bb5e2b4853f1e14a76f623554841c1" exitCode=0 Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.339110 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cwtw5" event={"ID":"f3e76902-0634-44b5-bb6b-0cdf63efaf87","Type":"ContainerDied","Data":"3a01c9c23b9dea86eb21b397da6cf942c2bb5e2b4853f1e14a76f623554841c1"} Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 
07:33:50.341158 4764 generic.go:334] "Generic (PLEG): container finished" podID="4a7123da-ea98-40d7-bed5-0cdbafc74ca1" containerID="e4acb1c925b44d9763be14d6bc219c5fd882d01ab50b508b7a3944c82cddaacd" exitCode=0 Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.341235 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9636-account-create-update-lj4pb" event={"ID":"4a7123da-ea98-40d7-bed5-0cdbafc74ca1","Type":"ContainerDied","Data":"e4acb1c925b44d9763be14d6bc219c5fd882d01ab50b508b7a3944c82cddaacd"} Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.353950 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f481ed8-7f32-478b-88ce-6caaa3a42074","Type":"ContainerStarted","Data":"750d83de8df0f8d4d02c992c9f13430349730b578730761c437e194e35f12c47"} Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.356386 4764 generic.go:334] "Generic (PLEG): container finished" podID="13f98420-ea6f-40cb-b274-0bf3b3282252" containerID="1e947509ede9b58e3c4e2c08675c499070d87eb94f1529dbb8060d270d2837d2" exitCode=0 Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.356453 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-232f-account-create-update-lzptk" event={"ID":"13f98420-ea6f-40cb-b274-0bf3b3282252","Type":"ContainerDied","Data":"1e947509ede9b58e3c4e2c08675c499070d87eb94f1529dbb8060d270d2837d2"} Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.357585 4764 generic.go:334] "Generic (PLEG): container finished" podID="f2172902-7472-4083-9070-86624562cdf5" containerID="9a2cb4d223ee98b2c8079996d42da44b8c7ae505a4e928bcc128518350ec9c96" exitCode=0 Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.357621 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" event={"ID":"f2172902-7472-4083-9070-86624562cdf5","Type":"ContainerDied","Data":"9a2cb4d223ee98b2c8079996d42da44b8c7ae505a4e928bcc128518350ec9c96"} Jan 27 07:33:50 
crc kubenswrapper[4764]: I0127 07:33:50.365708 4764 generic.go:334] "Generic (PLEG): container finished" podID="23a2109a-990b-4435-a62e-4b4ca7d52c1e" containerID="1bffcb01def431b5e658cd2aaed8ddad47711933995b6aa74f7af7ddb1d57bd7" exitCode=0 Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.365787 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cfvvc" event={"ID":"23a2109a-990b-4435-a62e-4b4ca7d52c1e","Type":"ContainerDied","Data":"1bffcb01def431b5e658cd2aaed8ddad47711933995b6aa74f7af7ddb1d57bd7"} Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.379806 4764 generic.go:334] "Generic (PLEG): container finished" podID="8c458dd5-a975-4d75-83f4-c58184f63ab2" containerID="f3c2b34a37faf07ff247bb50e686df1a1c83b87236b0c227ca075403ef9ef1b8" exitCode=0 Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.380062 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1f49-account-create-update-hdh2b" event={"ID":"8c458dd5-a975-4d75-83f4-c58184f63ab2","Type":"ContainerDied","Data":"f3c2b34a37faf07ff247bb50e686df1a1c83b87236b0c227ca075403ef9ef1b8"} Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.485958 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.771169598 podStartE2EDuration="49.485933224s" podCreationTimestamp="2026-01-27 07:33:01 +0000 UTC" firstStartedPulling="2026-01-27 07:33:38.056139604 +0000 UTC m=+1030.651762130" lastFinishedPulling="2026-01-27 07:33:43.77090322 +0000 UTC m=+1036.366525756" observedRunningTime="2026-01-27 07:33:50.46936401 +0000 UTC m=+1043.064986536" watchObservedRunningTime="2026-01-27 07:33:50.485933224 +0000 UTC m=+1043.081555750" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.762580 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-wbwxx"] Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.772430 4764 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.793959 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-vg78f"] Jan 27 07:33:50 crc kubenswrapper[4764]: E0127 07:33:50.794347 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fde588f-3a41-400e-9dd9-42ffb66989db" containerName="mariadb-database-create" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.794363 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fde588f-3a41-400e-9dd9-42ffb66989db" containerName="mariadb-database-create" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.794549 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fde588f-3a41-400e-9dd9-42ffb66989db" containerName="mariadb-database-create" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.795713 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.797879 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qvkm\" (UniqueName: \"kubernetes.io/projected/8fde588f-3a41-400e-9dd9-42ffb66989db-kube-api-access-9qvkm\") pod \"8fde588f-3a41-400e-9dd9-42ffb66989db\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.797956 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fde588f-3a41-400e-9dd9-42ffb66989db-operator-scripts\") pod \"8fde588f-3a41-400e-9dd9-42ffb66989db\" (UID: \"8fde588f-3a41-400e-9dd9-42ffb66989db\") " Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.798305 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 07:33:50 crc kubenswrapper[4764]: 
I0127 07:33:50.798786 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fde588f-3a41-400e-9dd9-42ffb66989db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8fde588f-3a41-400e-9dd9-42ffb66989db" (UID: "8fde588f-3a41-400e-9dd9-42ffb66989db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.803142 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fde588f-3a41-400e-9dd9-42ffb66989db-kube-api-access-9qvkm" (OuterVolumeSpecName: "kube-api-access-9qvkm") pod "8fde588f-3a41-400e-9dd9-42ffb66989db" (UID: "8fde588f-3a41-400e-9dd9-42ffb66989db"). InnerVolumeSpecName "kube-api-access-9qvkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.835807 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-vg78f"] Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.899898 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.899992 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.900091 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.900181 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.900222 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.900260 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md775\" (UniqueName: \"kubernetes.io/projected/c28a46ad-d379-4ce9-847b-87490923c34f-kube-api-access-md775\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.900341 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qvkm\" (UniqueName: \"kubernetes.io/projected/8fde588f-3a41-400e-9dd9-42ffb66989db-kube-api-access-9qvkm\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:50 crc kubenswrapper[4764]: I0127 07:33:50.900355 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8fde588f-3a41-400e-9dd9-42ffb66989db-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.001487 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.001561 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.001596 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.001618 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.001644 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md775\" (UniqueName: \"kubernetes.io/projected/c28a46ad-d379-4ce9-847b-87490923c34f-kube-api-access-md775\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: 
\"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.001672 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.004004 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.004054 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.004185 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.004330 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" 
Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.004400 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.025861 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md775\" (UniqueName: \"kubernetes.io/projected/c28a46ad-d379-4ce9-847b-87490923c34f-kube-api-access-md775\") pod \"dnsmasq-dns-56c9bc6f5c-vg78f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.148776 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.393635 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f57wb" event={"ID":"8fde588f-3a41-400e-9dd9-42ffb66989db","Type":"ContainerDied","Data":"44ac5aabc07909229e7818ccdd9635c87fbfc88ecd7fe1ac547f5dbf2d171610"} Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.393683 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-f57wb" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.393718 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44ac5aabc07909229e7818ccdd9635c87fbfc88ecd7fe1ac547f5dbf2d171610" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.396984 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" event={"ID":"f2172902-7472-4083-9070-86624562cdf5","Type":"ContainerStarted","Data":"80b4db2d615c7137804c11e40f78cd3d7e109418b465f67337eb4e17fad7013c"} Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.432484 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" podStartSLOduration=4.432455969 podStartE2EDuration="4.432455969s" podCreationTimestamp="2026-01-27 07:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:51.422989681 +0000 UTC m=+1044.018612227" watchObservedRunningTime="2026-01-27 07:33:51.432455969 +0000 UTC m=+1044.028078505" Jan 27 07:33:51 crc kubenswrapper[4764]: I0127 07:33:51.614680 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-vg78f"] Jan 27 07:33:52 crc kubenswrapper[4764]: I0127 07:33:52.405613 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:52 crc kubenswrapper[4764]: I0127 07:33:52.405591 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" podUID="f2172902-7472-4083-9070-86624562cdf5" containerName="dnsmasq-dns" containerID="cri-o://80b4db2d615c7137804c11e40f78cd3d7e109418b465f67337eb4e17fad7013c" gracePeriod=10 Jan 27 07:33:53 crc kubenswrapper[4764]: I0127 07:33:53.418620 4764 generic.go:334] "Generic (PLEG): container finished" 
podID="f2172902-7472-4083-9070-86624562cdf5" containerID="80b4db2d615c7137804c11e40f78cd3d7e109418b465f67337eb4e17fad7013c" exitCode=0 Jan 27 07:33:53 crc kubenswrapper[4764]: I0127 07:33:53.418661 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" event={"ID":"f2172902-7472-4083-9070-86624562cdf5","Type":"ContainerDied","Data":"80b4db2d615c7137804c11e40f78cd3d7e109418b465f67337eb4e17fad7013c"} Jan 27 07:33:53 crc kubenswrapper[4764]: I0127 07:33:53.763991 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:33:53 crc kubenswrapper[4764]: I0127 07:33:53.764058 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:33:53 crc kubenswrapper[4764]: I0127 07:33:53.764108 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:33:53 crc kubenswrapper[4764]: I0127 07:33:53.764808 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c23ae4d5813d2d046c09e56678bc336de540068540ec8945ee83efb0e572821"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:33:53 crc kubenswrapper[4764]: I0127 07:33:53.764879 4764 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://1c23ae4d5813d2d046c09e56678bc336de540068540ec8945ee83efb0e572821" gracePeriod=600 Jan 27 07:33:54 crc kubenswrapper[4764]: W0127 07:33:54.053313 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc28a46ad_d379_4ce9_847b_87490923c34f.slice/crio-4c9d189c768f162287a686f31c2a3869141957a956ea19029682ce974e95ffde WatchSource:0}: Error finding container 4c9d189c768f162287a686f31c2a3869141957a956ea19029682ce974e95ffde: Status 404 returned error can't find the container with id 4c9d189c768f162287a686f31c2a3869141957a956ea19029682ce974e95ffde Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.237578 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.267207 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.273311 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.299848 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-operator-scripts\") pod \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.299907 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjg5t\" (UniqueName: \"kubernetes.io/projected/13f98420-ea6f-40cb-b274-0bf3b3282252-kube-api-access-tjg5t\") pod \"13f98420-ea6f-40cb-b274-0bf3b3282252\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.299925 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c458dd5-a975-4d75-83f4-c58184f63ab2-operator-scripts\") pod \"8c458dd5-a975-4d75-83f4-c58184f63ab2\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.299977 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f98420-ea6f-40cb-b274-0bf3b3282252-operator-scripts\") pod \"13f98420-ea6f-40cb-b274-0bf3b3282252\" (UID: \"13f98420-ea6f-40cb-b274-0bf3b3282252\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.300037 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2p56\" (UniqueName: \"kubernetes.io/projected/8c458dd5-a975-4d75-83f4-c58184f63ab2-kube-api-access-s2p56\") pod \"8c458dd5-a975-4d75-83f4-c58184f63ab2\" (UID: \"8c458dd5-a975-4d75-83f4-c58184f63ab2\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.300094 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gsjtf\" (UniqueName: \"kubernetes.io/projected/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-kube-api-access-gsjtf\") pod \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\" (UID: \"4a7123da-ea98-40d7-bed5-0cdbafc74ca1\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.300949 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c458dd5-a975-4d75-83f4-c58184f63ab2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c458dd5-a975-4d75-83f4-c58184f63ab2" (UID: "8c458dd5-a975-4d75-83f4-c58184f63ab2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.301323 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a7123da-ea98-40d7-bed5-0cdbafc74ca1" (UID: "4a7123da-ea98-40d7-bed5-0cdbafc74ca1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.301855 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f98420-ea6f-40cb-b274-0bf3b3282252-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13f98420-ea6f-40cb-b274-0bf3b3282252" (UID: "13f98420-ea6f-40cb-b274-0bf3b3282252"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.303865 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.311259 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-kube-api-access-gsjtf" (OuterVolumeSpecName: "kube-api-access-gsjtf") pod "4a7123da-ea98-40d7-bed5-0cdbafc74ca1" (UID: "4a7123da-ea98-40d7-bed5-0cdbafc74ca1"). InnerVolumeSpecName "kube-api-access-gsjtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.321127 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f98420-ea6f-40cb-b274-0bf3b3282252-kube-api-access-tjg5t" (OuterVolumeSpecName: "kube-api-access-tjg5t") pod "13f98420-ea6f-40cb-b274-0bf3b3282252" (UID: "13f98420-ea6f-40cb-b274-0bf3b3282252"). InnerVolumeSpecName "kube-api-access-tjg5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.348361 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c458dd5-a975-4d75-83f4-c58184f63ab2-kube-api-access-s2p56" (OuterVolumeSpecName: "kube-api-access-s2p56") pod "8c458dd5-a975-4d75-83f4-c58184f63ab2" (UID: "8c458dd5-a975-4d75-83f4-c58184f63ab2"). InnerVolumeSpecName "kube-api-access-s2p56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.402395 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a2109a-990b-4435-a62e-4b4ca7d52c1e-operator-scripts\") pod \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\" (UID: \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.402596 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcx8j\" (UniqueName: \"kubernetes.io/projected/23a2109a-990b-4435-a62e-4b4ca7d52c1e-kube-api-access-gcx8j\") pod \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\" (UID: \"23a2109a-990b-4435-a62e-4b4ca7d52c1e\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403065 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a2109a-990b-4435-a62e-4b4ca7d52c1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23a2109a-990b-4435-a62e-4b4ca7d52c1e" (UID: "23a2109a-990b-4435-a62e-4b4ca7d52c1e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403206 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2p56\" (UniqueName: \"kubernetes.io/projected/8c458dd5-a975-4d75-83f4-c58184f63ab2-kube-api-access-s2p56\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403222 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsjtf\" (UniqueName: \"kubernetes.io/projected/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-kube-api-access-gsjtf\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403231 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a7123da-ea98-40d7-bed5-0cdbafc74ca1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403240 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjg5t\" (UniqueName: \"kubernetes.io/projected/13f98420-ea6f-40cb-b274-0bf3b3282252-kube-api-access-tjg5t\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403248 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c458dd5-a975-4d75-83f4-c58184f63ab2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403258 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a2109a-990b-4435-a62e-4b4ca7d52c1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.403268 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f98420-ea6f-40cb-b274-0bf3b3282252-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 
07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.405377 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a2109a-990b-4435-a62e-4b4ca7d52c1e-kube-api-access-gcx8j" (OuterVolumeSpecName: "kube-api-access-gcx8j") pod "23a2109a-990b-4435-a62e-4b4ca7d52c1e" (UID: "23a2109a-990b-4435-a62e-4b4ca7d52c1e"). InnerVolumeSpecName "kube-api-access-gcx8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.427497 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1f49-account-create-update-hdh2b" event={"ID":"8c458dd5-a975-4d75-83f4-c58184f63ab2","Type":"ContainerDied","Data":"edb9fc44af5120eb7a6dde2d2c8a07b0f4f3ee0bec9b7c9463e4a1ae865578e5"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.427594 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edb9fc44af5120eb7a6dde2d2c8a07b0f4f3ee0bec9b7c9463e4a1ae865578e5" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.427530 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1f49-account-create-update-hdh2b" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.428581 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cwtw5" event={"ID":"f3e76902-0634-44b5-bb6b-0cdf63efaf87","Type":"ContainerDied","Data":"ef31f69015f1a3b00d3a7c631eba0dda06081b0f12b0650202330c1dcd151ed3"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.428622 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef31f69015f1a3b00d3a7c631eba0dda06081b0f12b0650202330c1dcd151ed3" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.430806 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="1c23ae4d5813d2d046c09e56678bc336de540068540ec8945ee83efb0e572821" exitCode=0 Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.430872 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"1c23ae4d5813d2d046c09e56678bc336de540068540ec8945ee83efb0e572821"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.430924 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"1a6582187df2e5e6ef1f7d9ea2e06ec2178aed71a06db6ecea42208449605756"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.430941 4764 scope.go:117] "RemoveContainer" containerID="99095ad6a6beeaefac730e02f9f2d74bfc284be02a2e809f2e291e6bcbaaa57e" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.432942 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9636-account-create-update-lj4pb" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.432956 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9636-account-create-update-lj4pb" event={"ID":"4a7123da-ea98-40d7-bed5-0cdbafc74ca1","Type":"ContainerDied","Data":"00f168a1535cb57403dfe1d49a162299a32b66ea48f0128bbabe3af26c2a9b88"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.432978 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00f168a1535cb57403dfe1d49a162299a32b66ea48f0128bbabe3af26c2a9b88" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.435985 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-232f-account-create-update-lzptk" event={"ID":"13f98420-ea6f-40cb-b274-0bf3b3282252","Type":"ContainerDied","Data":"b5f622dd604630d47d84321a7dd47a962fb28f5dfb8e5a4a06f3d1b8d9c29c83"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.436009 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f622dd604630d47d84321a7dd47a962fb28f5dfb8e5a4a06f3d1b8d9c29c83" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.436012 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-232f-account-create-update-lzptk" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.437309 4764 generic.go:334] "Generic (PLEG): container finished" podID="c28a46ad-d379-4ce9-847b-87490923c34f" containerID="8291774e31cbc836c549480d278ec07c16c598e28c41d10fa0272291b5f9bc3e" exitCode=0 Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.437356 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" event={"ID":"c28a46ad-d379-4ce9-847b-87490923c34f","Type":"ContainerDied","Data":"8291774e31cbc836c549480d278ec07c16c598e28c41d10fa0272291b5f9bc3e"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.437400 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" event={"ID":"c28a46ad-d379-4ce9-847b-87490923c34f","Type":"ContainerStarted","Data":"4c9d189c768f162287a686f31c2a3869141957a956ea19029682ce974e95ffde"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.444022 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cfvvc" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.478484 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cfvvc" event={"ID":"23a2109a-990b-4435-a62e-4b4ca7d52c1e","Type":"ContainerDied","Data":"44cebcc1cb6c2a734c55aa16457ae3605f783ec607fc95d57fff3e631be07a26"} Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.478529 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44cebcc1cb6c2a734c55aa16457ae3605f783ec607fc95d57fff3e631be07a26" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.486206 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.505975 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e76902-0634-44b5-bb6b-0cdf63efaf87-operator-scripts\") pod \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.506676 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e76902-0634-44b5-bb6b-0cdf63efaf87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3e76902-0634-44b5-bb6b-0cdf63efaf87" (UID: "f3e76902-0634-44b5-bb6b-0cdf63efaf87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.506808 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmxpl\" (UniqueName: \"kubernetes.io/projected/f3e76902-0634-44b5-bb6b-0cdf63efaf87-kube-api-access-lmxpl\") pod \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\" (UID: \"f3e76902-0634-44b5-bb6b-0cdf63efaf87\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.507599 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e76902-0634-44b5-bb6b-0cdf63efaf87-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.507885 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcx8j\" (UniqueName: \"kubernetes.io/projected/23a2109a-990b-4435-a62e-4b4ca7d52c1e-kube-api-access-gcx8j\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.510727 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f3e76902-0634-44b5-bb6b-0cdf63efaf87-kube-api-access-lmxpl" (OuterVolumeSpecName: "kube-api-access-lmxpl") pod "f3e76902-0634-44b5-bb6b-0cdf63efaf87" (UID: "f3e76902-0634-44b5-bb6b-0cdf63efaf87"). InnerVolumeSpecName "kube-api-access-lmxpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.566155 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.608893 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-dns-svc\") pod \"f2172902-7472-4083-9070-86624562cdf5\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.608966 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-nb\") pod \"f2172902-7472-4083-9070-86624562cdf5\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.609054 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-config\") pod \"f2172902-7472-4083-9070-86624562cdf5\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.609088 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-sb\") pod \"f2172902-7472-4083-9070-86624562cdf5\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.609116 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ntnms\" (UniqueName: \"kubernetes.io/projected/f2172902-7472-4083-9070-86624562cdf5-kube-api-access-ntnms\") pod \"f2172902-7472-4083-9070-86624562cdf5\" (UID: \"f2172902-7472-4083-9070-86624562cdf5\") " Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.609566 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmxpl\" (UniqueName: \"kubernetes.io/projected/f3e76902-0634-44b5-bb6b-0cdf63efaf87-kube-api-access-lmxpl\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.616317 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2172902-7472-4083-9070-86624562cdf5-kube-api-access-ntnms" (OuterVolumeSpecName: "kube-api-access-ntnms") pod "f2172902-7472-4083-9070-86624562cdf5" (UID: "f2172902-7472-4083-9070-86624562cdf5"). InnerVolumeSpecName "kube-api-access-ntnms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.710052 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2172902-7472-4083-9070-86624562cdf5" (UID: "f2172902-7472-4083-9070-86624562cdf5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.710084 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-config" (OuterVolumeSpecName: "config") pod "f2172902-7472-4083-9070-86624562cdf5" (UID: "f2172902-7472-4083-9070-86624562cdf5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.710178 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntnms\" (UniqueName: \"kubernetes.io/projected/f2172902-7472-4083-9070-86624562cdf5-kube-api-access-ntnms\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.712655 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2172902-7472-4083-9070-86624562cdf5" (UID: "f2172902-7472-4083-9070-86624562cdf5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.721913 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2172902-7472-4083-9070-86624562cdf5" (UID: "f2172902-7472-4083-9070-86624562cdf5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.810870 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.810914 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.810930 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:54 crc kubenswrapper[4764]: I0127 07:33:54.810940 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2172902-7472-4083-9070-86624562cdf5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.453266 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" event={"ID":"f2172902-7472-4083-9070-86624562cdf5","Type":"ContainerDied","Data":"2113a1bc896d2cea23002e5f001437378ff58021cdf419109262ef4bb3b8b610"} Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.453730 4764 scope.go:117] "RemoveContainer" containerID="80b4db2d615c7137804c11e40f78cd3d7e109418b465f67337eb4e17fad7013c" Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.453299 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-wbwxx" Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.472033 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zgzzx" event={"ID":"1070498a-e8fe-43a6-b6d3-4a2862f24fee","Type":"ContainerStarted","Data":"0144b02fd2128fdb0c9f359ba7e805cecf2120b82774c5b231364d5222b4cd80"} Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.474850 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cwtw5" Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.475208 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" event={"ID":"c28a46ad-d379-4ce9-847b-87490923c34f","Type":"ContainerStarted","Data":"bb2adc6726c02fc451f551d5a70a837ec669aea2dbe0611222556a9da4b11214"} Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.475315 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.493005 4764 scope.go:117] "RemoveContainer" containerID="9a2cb4d223ee98b2c8079996d42da44b8c7ae505a4e928bcc128518350ec9c96" Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.499975 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-wbwxx"] Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.508168 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-wbwxx"] Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.523519 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zgzzx" podStartSLOduration=2.986622682 podStartE2EDuration="8.523492968s" podCreationTimestamp="2026-01-27 07:33:47 +0000 UTC" firstStartedPulling="2026-01-27 07:33:48.791887319 +0000 UTC m=+1041.387509835" lastFinishedPulling="2026-01-27 07:33:54.328757595 +0000 UTC 
m=+1046.924380121" observedRunningTime="2026-01-27 07:33:55.512564232 +0000 UTC m=+1048.108186768" watchObservedRunningTime="2026-01-27 07:33:55.523492968 +0000 UTC m=+1048.119115504" Jan 27 07:33:55 crc kubenswrapper[4764]: I0127 07:33:55.552271 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" podStartSLOduration=5.552206881 podStartE2EDuration="5.552206881s" podCreationTimestamp="2026-01-27 07:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:33:55.533164812 +0000 UTC m=+1048.128787348" watchObservedRunningTime="2026-01-27 07:33:55.552206881 +0000 UTC m=+1048.147829417" Jan 27 07:33:56 crc kubenswrapper[4764]: I0127 07:33:56.450786 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2172902-7472-4083-9070-86624562cdf5" path="/var/lib/kubelet/pods/f2172902-7472-4083-9070-86624562cdf5/volumes" Jan 27 07:33:57 crc kubenswrapper[4764]: I0127 07:33:57.505163 4764 generic.go:334] "Generic (PLEG): container finished" podID="1070498a-e8fe-43a6-b6d3-4a2862f24fee" containerID="0144b02fd2128fdb0c9f359ba7e805cecf2120b82774c5b231364d5222b4cd80" exitCode=0 Jan 27 07:33:57 crc kubenswrapper[4764]: I0127 07:33:57.505206 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zgzzx" event={"ID":"1070498a-e8fe-43a6-b6d3-4a2862f24fee","Type":"ContainerDied","Data":"0144b02fd2128fdb0c9f359ba7e805cecf2120b82774c5b231364d5222b4cd80"} Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.813643 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.880794 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-combined-ca-bundle\") pod \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.880940 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-config-data\") pod \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.881006 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsm9c\" (UniqueName: \"kubernetes.io/projected/1070498a-e8fe-43a6-b6d3-4a2862f24fee-kube-api-access-vsm9c\") pod \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\" (UID: \"1070498a-e8fe-43a6-b6d3-4a2862f24fee\") " Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.912596 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1070498a-e8fe-43a6-b6d3-4a2862f24fee-kube-api-access-vsm9c" (OuterVolumeSpecName: "kube-api-access-vsm9c") pod "1070498a-e8fe-43a6-b6d3-4a2862f24fee" (UID: "1070498a-e8fe-43a6-b6d3-4a2862f24fee"). InnerVolumeSpecName "kube-api-access-vsm9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.924608 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1070498a-e8fe-43a6-b6d3-4a2862f24fee" (UID: "1070498a-e8fe-43a6-b6d3-4a2862f24fee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.937274 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-config-data" (OuterVolumeSpecName: "config-data") pod "1070498a-e8fe-43a6-b6d3-4a2862f24fee" (UID: "1070498a-e8fe-43a6-b6d3-4a2862f24fee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.983032 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.983055 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1070498a-e8fe-43a6-b6d3-4a2862f24fee-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:58 crc kubenswrapper[4764]: I0127 07:33:58.983065 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsm9c\" (UniqueName: \"kubernetes.io/projected/1070498a-e8fe-43a6-b6d3-4a2862f24fee-kube-api-access-vsm9c\") on node \"crc\" DevicePath \"\"" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.520981 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zgzzx" event={"ID":"1070498a-e8fe-43a6-b6d3-4a2862f24fee","Type":"ContainerDied","Data":"eefccdadfcb514443825e604ff6cda32d997151b1bf51b48401db36ab305c5f9"} Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.521020 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eefccdadfcb514443825e604ff6cda32d997151b1bf51b48401db36ab305c5f9" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.521059 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zgzzx" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813159 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g896j"] Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813492 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f98420-ea6f-40cb-b274-0bf3b3282252" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813509 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f98420-ea6f-40cb-b274-0bf3b3282252" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813522 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e76902-0634-44b5-bb6b-0cdf63efaf87" containerName="mariadb-database-create" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813529 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e76902-0634-44b5-bb6b-0cdf63efaf87" containerName="mariadb-database-create" Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813546 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7123da-ea98-40d7-bed5-0cdbafc74ca1" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813553 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a7123da-ea98-40d7-bed5-0cdbafc74ca1" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813565 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a2109a-990b-4435-a62e-4b4ca7d52c1e" containerName="mariadb-database-create" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813571 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a2109a-990b-4435-a62e-4b4ca7d52c1e" containerName="mariadb-database-create" Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813583 4764 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1070498a-e8fe-43a6-b6d3-4a2862f24fee" containerName="keystone-db-sync" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813588 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="1070498a-e8fe-43a6-b6d3-4a2862f24fee" containerName="keystone-db-sync" Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813599 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2172902-7472-4083-9070-86624562cdf5" containerName="dnsmasq-dns" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813606 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2172902-7472-4083-9070-86624562cdf5" containerName="dnsmasq-dns" Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813614 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c458dd5-a975-4d75-83f4-c58184f63ab2" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813620 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c458dd5-a975-4d75-83f4-c58184f63ab2" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: E0127 07:33:59.813630 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2172902-7472-4083-9070-86624562cdf5" containerName="init" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813636 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2172902-7472-4083-9070-86624562cdf5" containerName="init" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813766 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a2109a-990b-4435-a62e-4b4ca7d52c1e" containerName="mariadb-database-create" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813774 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2172902-7472-4083-9070-86624562cdf5" containerName="dnsmasq-dns" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813783 4764 
memory_manager.go:354] "RemoveStaleState removing state" podUID="13f98420-ea6f-40cb-b274-0bf3b3282252" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813796 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="1070498a-e8fe-43a6-b6d3-4a2862f24fee" containerName="keystone-db-sync" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813808 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a7123da-ea98-40d7-bed5-0cdbafc74ca1" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813818 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e76902-0634-44b5-bb6b-0cdf63efaf87" containerName="mariadb-database-create" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.813828 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c458dd5-a975-4d75-83f4-c58184f63ab2" containerName="mariadb-account-create-update" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.814304 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g896j" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.820574 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.820961 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.822509 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.822656 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.822679 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88tdq" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.841506 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-vg78f"] Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.841825 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" podUID="c28a46ad-d379-4ce9-847b-87490923c34f" containerName="dnsmasq-dns" containerID="cri-o://bb2adc6726c02fc451f551d5a70a837ec669aea2dbe0611222556a9da4b11214" gracePeriod=10 Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.843201 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.857005 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g896j"] Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.898109 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-fernet-keys\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.898262 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-combined-ca-bundle\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.898298 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-config-data\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.898427 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-credential-keys\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.898566 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8j66\" (UniqueName: \"kubernetes.io/projected/04b38cb9-d915-4d86-af3f-1fccb514587f-kube-api-access-q8j66\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.898652 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-scripts\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.924834 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-bbtk2"] Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.928518 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:33:59 crc kubenswrapper[4764]: I0127 07:33:59.944875 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-bbtk2"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010063 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8j66\" (UniqueName: \"kubernetes.io/projected/04b38cb9-d915-4d86-af3f-1fccb514587f-kube-api-access-q8j66\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010225 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010257 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-scripts\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010290 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010347 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-fernet-keys\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010559 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfg2\" (UniqueName: \"kubernetes.io/projected/6581b572-9921-4ba9-9064-664ca2093d9b-kube-api-access-6bfg2\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010743 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010808 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-combined-ca-bundle\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010836 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-config-data\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010945 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-credential-keys\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010971 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-config\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.010992 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.051367 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-scripts\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.073802 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q8j66\" (UniqueName: \"kubernetes.io/projected/04b38cb9-d915-4d86-af3f-1fccb514587f-kube-api-access-q8j66\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.074383 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-config-data\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.078299 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-combined-ca-bundle\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.102137 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-fernet-keys\") pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.117269 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-config\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.117319 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.117359 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.117379 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.117454 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bfg2\" (UniqueName: \"kubernetes.io/projected/6581b572-9921-4ba9-9064-664ca2093d9b-kube-api-access-6bfg2\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.117511 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.118334 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: 
\"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.118359 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.118786 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-config\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.118981 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.119562 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.171542 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bf9d6d457-z2pfm"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.173553 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-credential-keys\") 
pod \"keystone-bootstrap-g896j\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.174561 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.204382 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-6ph72" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.204890 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.205053 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.208765 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.208980 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-g7f7g"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.210255 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.215936 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.216301 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lsfh5" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.216642 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.219951 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-scripts\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.220052 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec115988-20b0-4c1d-b09f-803bede49014-logs\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.220109 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec115988-20b0-4c1d-b09f-803bede49014-horizon-secret-key\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.220137 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2zwx\" (UniqueName: 
\"kubernetes.io/projected/ec115988-20b0-4c1d-b09f-803bede49014-kube-api-access-j2zwx\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.220195 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-config\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.220214 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-combined-ca-bundle\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.220276 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4dpp\" (UniqueName: \"kubernetes.io/projected/b3e6ef61-e6b4-4719-ae71-1983696d2d69-kube-api-access-g4dpp\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.230399 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-config-data\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.232414 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bfg2\" (UniqueName: 
\"kubernetes.io/projected/6581b572-9921-4ba9-9064-664ca2093d9b-kube-api-access-6bfg2\") pod \"dnsmasq-dns-54b4bb76d5-bbtk2\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.248549 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bf9d6d457-z2pfm"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.255766 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g7f7g"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.273283 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dtl6s"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.274697 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.279842 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.288351 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.288536 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dtl6s"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.289210 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-np27c" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.303347 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-bbtk2"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.305579 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.323503 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4tp44"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.328884 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.331936 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gg7tp" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.332985 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-scripts\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333026 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec115988-20b0-4c1d-b09f-803bede49014-logs\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333052 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-combined-ca-bundle\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333068 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-scripts\") pod 
\"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333090 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-config-data\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333108 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec115988-20b0-4c1d-b09f-803bede49014-horizon-secret-key\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333127 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2zwx\" (UniqueName: \"kubernetes.io/projected/ec115988-20b0-4c1d-b09f-803bede49014-kube-api-access-j2zwx\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333146 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77beac3a-985b-45d4-b804-ff2926d7ab7d-logs\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333166 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xff77\" (UniqueName: \"kubernetes.io/projected/77beac3a-985b-45d4-b804-ff2926d7ab7d-kube-api-access-xff77\") pod \"placement-db-sync-dtl6s\" (UID: 
\"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333191 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-config\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333210 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-combined-ca-bundle\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333237 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4dpp\" (UniqueName: \"kubernetes.io/projected/b3e6ef61-e6b4-4719-ae71-1983696d2d69-kube-api-access-g4dpp\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.333259 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-config-data\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.334249 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.334448 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-config-data\") pod 
\"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.334578 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-scripts\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.334909 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec115988-20b0-4c1d-b09f-803bede49014-logs\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.343501 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4tp44"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.345564 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-config\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.345873 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec115988-20b0-4c1d-b09f-803bede49014-horizon-secret-key\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.348361 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-combined-ca-bundle\") pod \"neutron-db-sync-g7f7g\" 
(UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.369112 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2zwx\" (UniqueName: \"kubernetes.io/projected/ec115988-20b0-4c1d-b09f-803bede49014-kube-api-access-j2zwx\") pod \"horizon-5bf9d6d457-z2pfm\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.413091 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4dpp\" (UniqueName: \"kubernetes.io/projected/b3e6ef61-e6b4-4719-ae71-1983696d2d69-kube-api-access-g4dpp\") pod \"neutron-db-sync-g7f7g\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.432907 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435223 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-combined-ca-bundle\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435259 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-scripts\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435282 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-config-data\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435309 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-db-sync-config-data\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435329 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77beac3a-985b-45d4-b804-ff2926d7ab7d-logs\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435347 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xff77\" (UniqueName: \"kubernetes.io/projected/77beac3a-985b-45d4-b804-ff2926d7ab7d-kube-api-access-xff77\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435399 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6hx\" (UniqueName: \"kubernetes.io/projected/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-kube-api-access-nq6hx\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.435543 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-combined-ca-bundle\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.445181 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77beac3a-985b-45d4-b804-ff2926d7ab7d-logs\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.445241 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-9b292"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.446515 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.448958 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-scripts\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.461461 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-config-data\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.471482 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xff77\" (UniqueName: \"kubernetes.io/projected/77beac3a-985b-45d4-b804-ff2926d7ab7d-kube-api-access-xff77\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" 
Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.481801 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-combined-ca-bundle\") pod \"placement-db-sync-dtl6s\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.511060 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-569cdf58df-jw5kj"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.512504 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.517984 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-9b292"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.539101 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.539695 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.539766 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-config\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: 
\"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.539945 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540025 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6hx\" (UniqueName: \"kubernetes.io/projected/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-kube-api-access-nq6hx\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540062 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-logs\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540108 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dbr\" (UniqueName: \"kubernetes.io/projected/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-kube-api-access-89dbr\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540189 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-config-data\") pod \"horizon-569cdf58df-jw5kj\" (UID: 
\"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540240 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540301 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-combined-ca-bundle\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540354 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-horizon-secret-key\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540410 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhsr\" (UniqueName: \"kubernetes.io/projected/97bf0eb8-706c-4461-92a1-6629d0c48905-kube-api-access-pqhsr\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540504 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-db-sync-config-data\") pod 
\"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.540547 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-scripts\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.550061 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-db-sync-config-data\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.550390 4764 generic.go:334] "Generic (PLEG): container finished" podID="c28a46ad-d379-4ce9-847b-87490923c34f" containerID="bb2adc6726c02fc451f551d5a70a837ec669aea2dbe0611222556a9da4b11214" exitCode=0 Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.550547 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" event={"ID":"c28a46ad-d379-4ce9-847b-87490923c34f","Type":"ContainerDied","Data":"bb2adc6726c02fc451f551d5a70a837ec669aea2dbe0611222556a9da4b11214"} Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.550662 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.558250 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-combined-ca-bundle\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.566791 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.587853 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6hx\" (UniqueName: \"kubernetes.io/projected/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-kube-api-access-nq6hx\") pod \"barbican-db-sync-4tp44\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.603526 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-569cdf58df-jw5kj"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.606410 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.641535 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.643122 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.647712 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.648026 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.648937 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.649497 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gr88t" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.651013 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-config\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.651078 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.651121 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-logs\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.651143 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-89dbr\" (UniqueName: \"kubernetes.io/projected/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-kube-api-access-89dbr\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.651178 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-config-data\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.651219 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.651275 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-horizon-secret-key\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.657082 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhsr\" (UniqueName: \"kubernetes.io/projected/97bf0eb8-706c-4461-92a1-6629d0c48905-kube-api-access-pqhsr\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.657176 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-scripts\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.652337 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-logs\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.655748 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-config-data\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.654377 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.658584 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.658946 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-config\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.659782 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-horizon-secret-key\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.654474 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.660525 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.664991 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.665214 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.665626 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7875w"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.668987 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-scripts\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.669158 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 
07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.669202 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.669905 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.670526 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.678696 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.681947 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.682239 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.682761 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9t8rn" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.684893 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.686325 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dbr\" (UniqueName: \"kubernetes.io/projected/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-kube-api-access-89dbr\") pod \"horizon-569cdf58df-jw5kj\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.689101 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhsr\" (UniqueName: \"kubernetes.io/projected/97bf0eb8-706c-4461-92a1-6629d0c48905-kube-api-access-pqhsr\") pod \"dnsmasq-dns-5dc4fcdbc-9b292\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.708828 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7875w"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.709363 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.723783 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.725629 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.754209 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:00 crc kubenswrapper[4764]: E0127 07:34:00.754678 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28a46ad-d379-4ce9-847b-87490923c34f" containerName="dnsmasq-dns" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.754700 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28a46ad-d379-4ce9-847b-87490923c34f" containerName="dnsmasq-dns" Jan 27 07:34:00 crc kubenswrapper[4764]: E0127 07:34:00.754717 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28a46ad-d379-4ce9-847b-87490923c34f" containerName="init" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.754724 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28a46ad-d379-4ce9-847b-87490923c34f" containerName="init" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.754941 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c28a46ad-d379-4ce9-847b-87490923c34f" containerName="dnsmasq-dns" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.756148 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.762620 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.762914 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.769981 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.770662 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.770746 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-combined-ca-bundle\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.770853 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-config-data\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " 
pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.770970 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771041 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxf4\" (UniqueName: \"kubernetes.io/projected/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-kube-api-access-zgxf4\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771124 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-db-sync-config-data\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771203 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-scripts\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771294 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7khg\" (UniqueName: \"kubernetes.io/projected/9b16d2c7-ef2f-4198-947f-f688d3018a26-kube-api-access-v7khg\") pod \"glance-default-external-api-0\" (UID: 
\"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771377 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771496 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-config-data\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771583 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-etc-machine-id\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771728 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-logs\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.771812 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-scripts\") pod \"cinder-db-sync-7875w\" (UID: 
\"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.782318 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.805056 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.875820 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.876629 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-sb\") pod \"c28a46ad-d379-4ce9-847b-87490923c34f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.876674 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-swift-storage-0\") pod \"c28a46ad-d379-4ce9-847b-87490923c34f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.876733 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-svc\") pod \"c28a46ad-d379-4ce9-847b-87490923c34f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.876809 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-nb\") pod \"c28a46ad-d379-4ce9-847b-87490923c34f\" (UID: 
\"c28a46ad-d379-4ce9-847b-87490923c34f\") " Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.876840 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config\") pod \"c28a46ad-d379-4ce9-847b-87490923c34f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.876909 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md775\" (UniqueName: \"kubernetes.io/projected/c28a46ad-d379-4ce9-847b-87490923c34f-kube-api-access-md775\") pod \"c28a46ad-d379-4ce9-847b-87490923c34f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877384 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxf4\" (UniqueName: \"kubernetes.io/projected/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-kube-api-access-zgxf4\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877417 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgwvk\" (UniqueName: \"kubernetes.io/projected/1b9fb048-8078-42de-846a-a7a77ac34d05-kube-api-access-zgwvk\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877455 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-db-sync-config-data\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 
07:34:00.877471 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877489 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877518 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-scripts\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877543 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7khg\" (UniqueName: \"kubernetes.io/projected/9b16d2c7-ef2f-4198-947f-f688d3018a26-kube-api-access-v7khg\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877563 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: 
I0127 07:34:00.877586 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-scripts\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877603 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-config-data\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877628 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-etc-machine-id\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877645 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877676 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877691 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-logs\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877706 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-scripts\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.877726 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878221 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878244 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-config-data\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878273 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878304 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878327 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-log-httpd\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878343 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-run-httpd\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878358 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czpgl\" (UniqueName: \"kubernetes.io/projected/303940fa-42c3-4597-a545-66c946caf680-kube-api-access-czpgl\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878375 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" 
Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878394 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878410 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878430 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-combined-ca-bundle\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878460 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-config-data\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878475 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.878917 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-etc-machine-id\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.883004 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-db-sync-config-data\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.885616 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.895670 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-scripts\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.898360 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-logs\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.899453 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.905701 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-scripts\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.912152 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.912360 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.912637 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28a46ad-d379-4ce9-847b-87490923c34f-kube-api-access-md775" (OuterVolumeSpecName: "kube-api-access-md775") pod "c28a46ad-d379-4ce9-847b-87490923c34f" (UID: "c28a46ad-d379-4ce9-847b-87490923c34f"). InnerVolumeSpecName "kube-api-access-md775". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.912734 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-config-data\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.913767 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-combined-ca-bundle\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.935913 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxf4\" (UniqueName: \"kubernetes.io/projected/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-kube-api-access-zgxf4\") pod \"cinder-db-sync-7875w\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.946313 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-config-data\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980267 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-log-httpd\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980336 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-run-httpd\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980385 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czpgl\" (UniqueName: \"kubernetes.io/projected/303940fa-42c3-4597-a545-66c946caf680-kube-api-access-czpgl\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980407 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980427 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980490 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgwvk\" (UniqueName: \"kubernetes.io/projected/1b9fb048-8078-42de-846a-a7a77ac34d05-kube-api-access-zgwvk\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980509 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980527 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980579 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-scripts\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980631 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980652 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980674 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980702 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980716 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-config-data\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980745 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.980845 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md775\" (UniqueName: \"kubernetes.io/projected/c28a46ad-d379-4ce9-847b-87490923c34f-kube-api-access-md775\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.981541 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-log-httpd\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.982720 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.982795 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.983989 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-run-httpd\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.984585 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:00 crc kubenswrapper[4764]: I0127 07:34:00.986109 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.002293 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.003956 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.003984 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.009976 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7khg\" (UniqueName: \"kubernetes.io/projected/9b16d2c7-ef2f-4198-947f-f688d3018a26-kube-api-access-v7khg\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.022142 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-config-data\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.027289 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.027914 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.048760 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.049600 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-scripts\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.051295 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czpgl\" (UniqueName: \"kubernetes.io/projected/303940fa-42c3-4597-a545-66c946caf680-kube-api-access-czpgl\") pod \"ceilometer-0\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " pod="openstack/ceilometer-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.072923 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.077471 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c28a46ad-d379-4ce9-847b-87490923c34f" (UID: "c28a46ad-d379-4ce9-847b-87490923c34f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.077871 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgwvk\" (UniqueName: \"kubernetes.io/projected/1b9fb048-8078-42de-846a-a7a77ac34d05-kube-api-access-zgwvk\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.092752 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config" (OuterVolumeSpecName: "config") pod "c28a46ad-d379-4ce9-847b-87490923c34f" (UID: "c28a46ad-d379-4ce9-847b-87490923c34f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.093969 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config\") pod \"c28a46ad-d379-4ce9-847b-87490923c34f\" (UID: \"c28a46ad-d379-4ce9-847b-87490923c34f\") " Jan 27 07:34:01 crc kubenswrapper[4764]: W0127 07:34:01.094322 4764 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c28a46ad-d379-4ce9-847b-87490923c34f/volumes/kubernetes.io~configmap/config Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.094339 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config" (OuterVolumeSpecName: "config") pod "c28a46ad-d379-4ce9-847b-87490923c34f" (UID: "c28a46ad-d379-4ce9-847b-87490923c34f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.095571 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.095593 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.117756 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c28a46ad-d379-4ce9-847b-87490923c34f" (UID: "c28a46ad-d379-4ce9-847b-87490923c34f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.120608 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c28a46ad-d379-4ce9-847b-87490923c34f" (UID: "c28a46ad-d379-4ce9-847b-87490923c34f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.137792 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.153942 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c28a46ad-d379-4ce9-847b-87490923c34f" (UID: "c28a46ad-d379-4ce9-847b-87490923c34f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.188589 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-bbtk2"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.213143 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.213206 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.213220 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c28a46ad-d379-4ce9-847b-87490923c34f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.247240 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g896j"] Jan 27 07:34:01 crc 
kubenswrapper[4764]: I0127 07:34:01.287554 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.330265 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.368340 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bf9d6d457-z2pfm"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.389403 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g7f7g"] Jan 27 07:34:01 crc kubenswrapper[4764]: W0127 07:34:01.400045 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e6ef61_e6b4_4719_ae71_1983696d2d69.slice/crio-24afbff970175b8daa6bad5bc6f034306425a0c78687be491fdf7a28e65b2849 WatchSource:0}: Error finding container 24afbff970175b8daa6bad5bc6f034306425a0c78687be491fdf7a28e65b2849: Status 404 returned error can't find the container with id 24afbff970175b8daa6bad5bc6f034306425a0c78687be491fdf7a28e65b2849 Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.423878 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.614977 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" event={"ID":"c28a46ad-d379-4ce9-847b-87490923c34f","Type":"ContainerDied","Data":"4c9d189c768f162287a686f31c2a3869141957a956ea19029682ce974e95ffde"} Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.615266 4764 scope.go:117] "RemoveContainer" containerID="bb2adc6726c02fc451f551d5a70a837ec669aea2dbe0611222556a9da4b11214" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.615050 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-vg78f" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.619138 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g7f7g" event={"ID":"b3e6ef61-e6b4-4719-ae71-1983696d2d69","Type":"ContainerStarted","Data":"24afbff970175b8daa6bad5bc6f034306425a0c78687be491fdf7a28e65b2849"} Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.622400 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf9d6d457-z2pfm" event={"ID":"ec115988-20b0-4c1d-b09f-803bede49014","Type":"ContainerStarted","Data":"69f827e86bf12ea9ecb89a0ce57e9672c782ac0634fb8463c4f710663030ae08"} Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.629786 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" event={"ID":"6581b572-9921-4ba9-9064-664ca2093d9b","Type":"ContainerStarted","Data":"c072e72c160054202c26fcfd4855df74ed5cf7ff80c86241eada9100679b0967"} Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.634124 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g896j" event={"ID":"04b38cb9-d915-4d86-af3f-1fccb514587f","Type":"ContainerStarted","Data":"ab48f233021f8166a6815bcde5a5d59d14a86f9582f1eb9cda0fd1d2d3bf71a9"} Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.642746 4764 scope.go:117] "RemoveContainer" containerID="8291774e31cbc836c549480d278ec07c16c598e28c41d10fa0272291b5f9bc3e" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.681451 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-vg78f"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.690655 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-vg78f"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.723555 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 
07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.747709 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bf9d6d457-z2pfm"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.800265 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79c4cf44df-7x8nj"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.801870 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.858722 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79c4cf44df-7x8nj"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.884165 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dtl6s"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.894519 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4tp44"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.913879 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.948965 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-9b292"] Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.950366 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-config-data\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.951147 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-logs\") pod \"horizon-79c4cf44df-7x8nj\" (UID: 
\"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.951200 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-scripts\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.951855 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ll7l\" (UniqueName: \"kubernetes.io/projected/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-kube-api-access-9ll7l\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:01 crc kubenswrapper[4764]: I0127 07:34:01.951916 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-horizon-secret-key\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.011775 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-569cdf58df-jw5kj"] Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.020356 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.057101 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7875w"] Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.058242 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ll7l\" (UniqueName: 
\"kubernetes.io/projected/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-kube-api-access-9ll7l\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.058285 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-horizon-secret-key\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.058367 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-config-data\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.058428 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-logs\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.058476 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-scripts\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.078233 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-scripts\") pod \"horizon-79c4cf44df-7x8nj\" (UID: 
\"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.087789 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-logs\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.090860 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-horizon-secret-key\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.092064 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-config-data\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.104708 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ll7l\" (UniqueName: \"kubernetes.io/projected/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-kube-api-access-9ll7l\") pod \"horizon-79c4cf44df-7x8nj\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.189553 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.261046 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.354484 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:02 crc kubenswrapper[4764]: W0127 07:34:02.367907 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b16d2c7_ef2f_4198_947f_f688d3018a26.slice/crio-d495afcb9727bf74d42c4aa276098fe895cfd35b30d3ed672235415330344231 WatchSource:0}: Error finding container d495afcb9727bf74d42c4aa276098fe895cfd35b30d3ed672235415330344231: Status 404 returned error can't find the container with id d495afcb9727bf74d42c4aa276098fe895cfd35b30d3ed672235415330344231 Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.465617 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28a46ad-d379-4ce9-847b-87490923c34f" path="/var/lib/kubelet/pods/c28a46ad-d379-4ce9-847b-87490923c34f/volumes" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.474151 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.643116 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g896j" event={"ID":"04b38cb9-d915-4d86-af3f-1fccb514587f","Type":"ContainerStarted","Data":"d6889f54e291162dba7902a4ed27ed0fcc11a596c4d7b19375b415ed8a2be5ee"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.656896 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b9fb048-8078-42de-846a-a7a77ac34d05","Type":"ContainerStarted","Data":"60c2064ac0be36fc3753433c2e6b9710e546973e05fa4d100ea302ee13ec240a"} Jan 27 07:34:02 crc 
kubenswrapper[4764]: I0127 07:34:02.664076 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtl6s" event={"ID":"77beac3a-985b-45d4-b804-ff2926d7ab7d","Type":"ContainerStarted","Data":"2cf61e4a76bdd8384c44bed5ad2136ec0cf9821bca172ed6747035476b30a099"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.666624 4764 generic.go:334] "Generic (PLEG): container finished" podID="6581b572-9921-4ba9-9064-664ca2093d9b" containerID="37c5ba2c0d0ee5312920553999216971305df00b80b1587eee73ce5e2ed98d29" exitCode=0 Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.666721 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" event={"ID":"6581b572-9921-4ba9-9064-664ca2093d9b","Type":"ContainerDied","Data":"37c5ba2c0d0ee5312920553999216971305df00b80b1587eee73ce5e2ed98d29"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.669133 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7875w" event={"ID":"64c3a86b-6e48-4aa4-950e-d8ecf643cf48","Type":"ContainerStarted","Data":"ed12c56bd8975c67c470bd8a4c68a399075013e263822bf50f7cc6af593b72bf"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.669649 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g896j" podStartSLOduration=3.669639935 podStartE2EDuration="3.669639935s" podCreationTimestamp="2026-01-27 07:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:02.661667296 +0000 UTC m=+1055.257289822" watchObservedRunningTime="2026-01-27 07:34:02.669639935 +0000 UTC m=+1055.265262461" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.671624 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-569cdf58df-jw5kj" 
event={"ID":"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f","Type":"ContainerStarted","Data":"2a3164fed0ca66ac8b422a48d3a78ad951d55b94b3d69fce25f0434a6e83d73e"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.674264 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b16d2c7-ef2f-4198-947f-f688d3018a26","Type":"ContainerStarted","Data":"d495afcb9727bf74d42c4aa276098fe895cfd35b30d3ed672235415330344231"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.675657 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerStarted","Data":"7ba0fabdf694d952b7406fa2d89189f4c901414055b6454c5b845efb5268e354"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.677503 4764 generic.go:334] "Generic (PLEG): container finished" podID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerID="28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be" exitCode=0 Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.677597 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" event={"ID":"97bf0eb8-706c-4461-92a1-6629d0c48905","Type":"ContainerDied","Data":"28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.677623 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" event={"ID":"97bf0eb8-706c-4461-92a1-6629d0c48905","Type":"ContainerStarted","Data":"9de2b491cf60b10f195dbe127fb1c67a4c8262b3484c0dc67b4ad3761378b842"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.685159 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g7f7g" event={"ID":"b3e6ef61-e6b4-4719-ae71-1983696d2d69","Type":"ContainerStarted","Data":"032f9b05639300c211df9ab39967f53ea3beaa71a440612d1afd5cd03f76ebad"} Jan 27 07:34:02 crc kubenswrapper[4764]: 
I0127 07:34:02.690405 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4tp44" event={"ID":"7cfdc388-3353-43e2-99b0-7c6e17fb78f9","Type":"ContainerStarted","Data":"ec9fc1a17a76b8cac95708f63cb8a5b9a38ad63d2ec4b176713ecf1c4c585d7c"} Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.707557 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-g7f7g" podStartSLOduration=2.707538529 podStartE2EDuration="2.707538529s" podCreationTimestamp="2026-01-27 07:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:02.701927672 +0000 UTC m=+1055.297550208" watchObservedRunningTime="2026-01-27 07:34:02.707538529 +0000 UTC m=+1055.303161055" Jan 27 07:34:02 crc kubenswrapper[4764]: I0127 07:34:02.817374 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79c4cf44df-7x8nj"] Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.272901 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.387809 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-config\") pod \"6581b572-9921-4ba9-9064-664ca2093d9b\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.388241 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-swift-storage-0\") pod \"6581b572-9921-4ba9-9064-664ca2093d9b\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.388380 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bfg2\" (UniqueName: \"kubernetes.io/projected/6581b572-9921-4ba9-9064-664ca2093d9b-kube-api-access-6bfg2\") pod \"6581b572-9921-4ba9-9064-664ca2093d9b\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.388414 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-svc\") pod \"6581b572-9921-4ba9-9064-664ca2093d9b\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.388448 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-nb\") pod \"6581b572-9921-4ba9-9064-664ca2093d9b\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.388487 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-sb\") pod \"6581b572-9921-4ba9-9064-664ca2093d9b\" (UID: \"6581b572-9921-4ba9-9064-664ca2093d9b\") " Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.393672 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6581b572-9921-4ba9-9064-664ca2093d9b-kube-api-access-6bfg2" (OuterVolumeSpecName: "kube-api-access-6bfg2") pod "6581b572-9921-4ba9-9064-664ca2093d9b" (UID: "6581b572-9921-4ba9-9064-664ca2093d9b"). InnerVolumeSpecName "kube-api-access-6bfg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.413834 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6581b572-9921-4ba9-9064-664ca2093d9b" (UID: "6581b572-9921-4ba9-9064-664ca2093d9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.416864 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6581b572-9921-4ba9-9064-664ca2093d9b" (UID: "6581b572-9921-4ba9-9064-664ca2093d9b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.417031 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-config" (OuterVolumeSpecName: "config") pod "6581b572-9921-4ba9-9064-664ca2093d9b" (UID: "6581b572-9921-4ba9-9064-664ca2093d9b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.420426 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6581b572-9921-4ba9-9064-664ca2093d9b" (UID: "6581b572-9921-4ba9-9064-664ca2093d9b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.434361 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6581b572-9921-4ba9-9064-664ca2093d9b" (UID: "6581b572-9921-4ba9-9064-664ca2093d9b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.490659 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bfg2\" (UniqueName: \"kubernetes.io/projected/6581b572-9921-4ba9-9064-664ca2093d9b-kube-api-access-6bfg2\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.490737 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.490749 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.490757 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-ovsdbserver-sb\") on 
node \"crc\" DevicePath \"\"" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.490765 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.490774 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6581b572-9921-4ba9-9064-664ca2093d9b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.755758 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b9fb048-8078-42de-846a-a7a77ac34d05","Type":"ContainerStarted","Data":"f34b20524afae895db21235bb2011c9700c5edf74ecc839e9a0c2bc527b60495"} Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.758590 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b16d2c7-ef2f-4198-947f-f688d3018a26","Type":"ContainerStarted","Data":"0a896c3524bf53878abf2ab1fbe0f43a4e5b3e22bd7271df0a2a83f1a56c188b"} Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.761052 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" event={"ID":"97bf0eb8-706c-4461-92a1-6629d0c48905","Type":"ContainerStarted","Data":"64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef"} Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.761275 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.774821 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" event={"ID":"6581b572-9921-4ba9-9064-664ca2093d9b","Type":"ContainerDied","Data":"c072e72c160054202c26fcfd4855df74ed5cf7ff80c86241eada9100679b0967"} Jan 
27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.774876 4764 scope.go:117] "RemoveContainer" containerID="37c5ba2c0d0ee5312920553999216971305df00b80b1587eee73ce5e2ed98d29" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.775026 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-bbtk2" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.786550 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c4cf44df-7x8nj" event={"ID":"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d","Type":"ContainerStarted","Data":"16d92cac6fe05d13f99cfb65221eac06d968d3d555e8a1e0de69551265e3ebfc"} Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.798445 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" podStartSLOduration=3.79841532 podStartE2EDuration="3.79841532s" podCreationTimestamp="2026-01-27 07:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:03.793806189 +0000 UTC m=+1056.389428715" watchObservedRunningTime="2026-01-27 07:34:03.79841532 +0000 UTC m=+1056.394037846" Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.887670 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-bbtk2"] Jan 27 07:34:03 crc kubenswrapper[4764]: I0127 07:34:03.910597 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-bbtk2"] Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.455222 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6581b572-9921-4ba9-9064-664ca2093d9b" path="/var/lib/kubelet/pods/6581b572-9921-4ba9-9064-664ca2093d9b/volumes" Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.802186 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"1b9fb048-8078-42de-846a-a7a77ac34d05","Type":"ContainerStarted","Data":"7bb5665051cd51c41c15030e1ceaaa3d898c261b0fc4d0f29a579b2e1e40b7b0"} Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.802329 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-log" containerID="cri-o://f34b20524afae895db21235bb2011c9700c5edf74ecc839e9a0c2bc527b60495" gracePeriod=30 Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.802754 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-httpd" containerID="cri-o://7bb5665051cd51c41c15030e1ceaaa3d898c261b0fc4d0f29a579b2e1e40b7b0" gracePeriod=30 Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.806545 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b16d2c7-ef2f-4198-947f-f688d3018a26","Type":"ContainerStarted","Data":"55294ffaec0925cd73be29151ff9f1f6ed13901b4a63c3782d3d188b78054d98"} Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.807005 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-log" containerID="cri-o://0a896c3524bf53878abf2ab1fbe0f43a4e5b3e22bd7271df0a2a83f1a56c188b" gracePeriod=30 Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.807044 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-httpd" containerID="cri-o://55294ffaec0925cd73be29151ff9f1f6ed13901b4a63c3782d3d188b78054d98" gracePeriod=30 Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.822153 4764 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.82213792 podStartE2EDuration="4.82213792s" podCreationTimestamp="2026-01-27 07:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:04.820648981 +0000 UTC m=+1057.416271507" watchObservedRunningTime="2026-01-27 07:34:04.82213792 +0000 UTC m=+1057.417760446" Jan 27 07:34:04 crc kubenswrapper[4764]: I0127 07:34:04.851634 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.851608413 podStartE2EDuration="4.851608413s" podCreationTimestamp="2026-01-27 07:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:04.850555585 +0000 UTC m=+1057.446178131" watchObservedRunningTime="2026-01-27 07:34:04.851608413 +0000 UTC m=+1057.447230949" Jan 27 07:34:05 crc kubenswrapper[4764]: E0127 07:34:05.144958 4764 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b9fb048_8078_42de_846a_a7a77ac34d05.slice/crio-7bb5665051cd51c41c15030e1ceaaa3d898c261b0fc4d0f29a579b2e1e40b7b0.scope\": RecentStats: unable to find data in memory cache]" Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.831466 4764 generic.go:334] "Generic (PLEG): container finished" podID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerID="7bb5665051cd51c41c15030e1ceaaa3d898c261b0fc4d0f29a579b2e1e40b7b0" exitCode=0 Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.831502 4764 generic.go:334] "Generic (PLEG): container finished" podID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerID="f34b20524afae895db21235bb2011c9700c5edf74ecc839e9a0c2bc527b60495" exitCode=143 
Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.831563 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b9fb048-8078-42de-846a-a7a77ac34d05","Type":"ContainerDied","Data":"7bb5665051cd51c41c15030e1ceaaa3d898c261b0fc4d0f29a579b2e1e40b7b0"} Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.831594 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b9fb048-8078-42de-846a-a7a77ac34d05","Type":"ContainerDied","Data":"f34b20524afae895db21235bb2011c9700c5edf74ecc839e9a0c2bc527b60495"} Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.831606 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b9fb048-8078-42de-846a-a7a77ac34d05","Type":"ContainerDied","Data":"60c2064ac0be36fc3753433c2e6b9710e546973e05fa4d100ea302ee13ec240a"} Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.831618 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60c2064ac0be36fc3753433c2e6b9710e546973e05fa4d100ea302ee13ec240a" Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.852348 4764 generic.go:334] "Generic (PLEG): container finished" podID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerID="55294ffaec0925cd73be29151ff9f1f6ed13901b4a63c3782d3d188b78054d98" exitCode=0 Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.852374 4764 generic.go:334] "Generic (PLEG): container finished" podID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerID="0a896c3524bf53878abf2ab1fbe0f43a4e5b3e22bd7271df0a2a83f1a56c188b" exitCode=143 Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.852393 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b16d2c7-ef2f-4198-947f-f688d3018a26","Type":"ContainerDied","Data":"55294ffaec0925cd73be29151ff9f1f6ed13901b4a63c3782d3d188b78054d98"} Jan 27 07:34:05 
crc kubenswrapper[4764]: I0127 07:34:05.852417 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b16d2c7-ef2f-4198-947f-f688d3018a26","Type":"ContainerDied","Data":"0a896c3524bf53878abf2ab1fbe0f43a4e5b3e22bd7271df0a2a83f1a56c188b"} Jan 27 07:34:05 crc kubenswrapper[4764]: I0127 07:34:05.899325 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.020078 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.050510 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgwvk\" (UniqueName: \"kubernetes.io/projected/1b9fb048-8078-42de-846a-a7a77ac34d05-kube-api-access-zgwvk\") pod \"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.050658 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-httpd-run\") pod \"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.050745 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-scripts\") pod \"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.050822 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-internal-tls-certs\") pod 
\"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.050868 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.050926 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-config-data\") pod \"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.051000 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-logs\") pod \"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.051054 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-combined-ca-bundle\") pod \"1b9fb048-8078-42de-846a-a7a77ac34d05\" (UID: \"1b9fb048-8078-42de-846a-a7a77ac34d05\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.052799 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.053379 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-logs" (OuterVolumeSpecName: "logs") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.056557 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9fb048-8078-42de-846a-a7a77ac34d05-kube-api-access-zgwvk" (OuterVolumeSpecName: "kube-api-access-zgwvk") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "kube-api-access-zgwvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.058425 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.058795 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-scripts" (OuterVolumeSpecName: "scripts") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.097337 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.124111 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-config-data" (OuterVolumeSpecName: "config-data") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.139864 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1b9fb048-8078-42de-846a-a7a77ac34d05" (UID: "1b9fb048-8078-42de-846a-a7a77ac34d05"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.152106 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-config-data\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.152366 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-public-tls-certs\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.152590 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-logs\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.152721 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-combined-ca-bundle\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.152805 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-scripts\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.152962 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7khg\" (UniqueName: 
\"kubernetes.io/projected/9b16d2c7-ef2f-4198-947f-f688d3018a26-kube-api-access-v7khg\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.153030 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.153138 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-httpd-run\") pod \"9b16d2c7-ef2f-4198-947f-f688d3018a26\" (UID: \"9b16d2c7-ef2f-4198-947f-f688d3018a26\") " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.154249 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-logs" (OuterVolumeSpecName: "logs") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.154841 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155662 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155714 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155732 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155749 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgwvk\" (UniqueName: \"kubernetes.io/projected/1b9fb048-8078-42de-846a-a7a77ac34d05-kube-api-access-zgwvk\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155759 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b9fb048-8078-42de-846a-a7a77ac34d05-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155788 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155798 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155827 4764 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155839 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b16d2c7-ef2f-4198-947f-f688d3018a26-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.155866 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9fb048-8078-42de-846a-a7a77ac34d05-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.157494 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-scripts" (OuterVolumeSpecName: "scripts") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.157784 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.158760 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b16d2c7-ef2f-4198-947f-f688d3018a26-kube-api-access-v7khg" (OuterVolumeSpecName: "kube-api-access-v7khg") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "kube-api-access-v7khg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.187569 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.199344 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.220641 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.229614 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-config-data" (OuterVolumeSpecName: "config-data") pod "9b16d2c7-ef2f-4198-947f-f688d3018a26" (UID: "9b16d2c7-ef2f-4198-947f-f688d3018a26"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.258241 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.258284 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.258300 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.258315 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b16d2c7-ef2f-4198-947f-f688d3018a26-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.258327 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7khg\" (UniqueName: \"kubernetes.io/projected/9b16d2c7-ef2f-4198-947f-f688d3018a26-kube-api-access-v7khg\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.258373 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.258386 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.277612 4764 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.360362 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.879836 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.879859 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.879949 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b16d2c7-ef2f-4198-947f-f688d3018a26","Type":"ContainerDied","Data":"d495afcb9727bf74d42c4aa276098fe895cfd35b30d3ed672235415330344231"} Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.880014 4764 scope.go:117] "RemoveContainer" containerID="55294ffaec0925cd73be29151ff9f1f6ed13901b4a63c3782d3d188b78054d98" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.920265 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.934951 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.950504 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.962557 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.973718 4764 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:06 crc kubenswrapper[4764]: E0127 07:34:06.974431 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-log" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974471 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-log" Jan 27 07:34:06 crc kubenswrapper[4764]: E0127 07:34:06.974527 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6581b572-9921-4ba9-9064-664ca2093d9b" containerName="init" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974534 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6581b572-9921-4ba9-9064-664ca2093d9b" containerName="init" Jan 27 07:34:06 crc kubenswrapper[4764]: E0127 07:34:06.974545 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-httpd" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974553 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-httpd" Jan 27 07:34:06 crc kubenswrapper[4764]: E0127 07:34:06.974568 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-log" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974575 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-log" Jan 27 07:34:06 crc kubenswrapper[4764]: E0127 07:34:06.974582 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-httpd" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974595 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-httpd" Jan 27 
07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974854 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="6581b572-9921-4ba9-9064-664ca2093d9b" containerName="init" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974873 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-log" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974882 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" containerName="glance-httpd" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974895 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-httpd" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.974909 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" containerName="glance-log" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.976304 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.979820 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.980055 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gr88t" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.980174 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.984431 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.986509 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.992987 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.994777 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:06 crc kubenswrapper[4764]: I0127 07:34:06.998801 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:06.999762 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.016333 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083130 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083191 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-config-data\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083245 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-logs\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083276 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083298 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-scripts\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083315 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083340 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgj2\" (UniqueName: \"kubernetes.io/projected/b864e103-0adf-4f21-8614-1f15c364feba-kube-api-access-nqgj2\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083359 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083400 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083422 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083507 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94g64\" (UniqueName: \"kubernetes.io/projected/5345db9d-96c7-4d88-a071-ba0995678ce6-kube-api-access-94g64\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083550 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083602 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083642 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083679 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-logs\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.083699 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.185728 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-logs\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.185779 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.185834 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.186204 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-logs\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187204 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.185855 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-config-data\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187318 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-logs\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187349 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187396 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-scripts\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187819 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187864 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqgj2\" (UniqueName: \"kubernetes.io/projected/b864e103-0adf-4f21-8614-1f15c364feba-kube-api-access-nqgj2\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187873 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-logs\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.187890 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188064 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188111 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188315 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94g64\" (UniqueName: \"kubernetes.io/projected/5345db9d-96c7-4d88-a071-ba0995678ce6-kube-api-access-94g64\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188379 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188462 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188500 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188609 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.188880 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.189008 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.202136 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-config-data\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 
07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.203116 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.205398 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.206576 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.207214 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.218199 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-scripts\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.218692 4764 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-94g64\" (UniqueName: \"kubernetes.io/projected/5345db9d-96c7-4d88-a071-ba0995678ce6-kube-api-access-94g64\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.218750 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.221868 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqgj2\" (UniqueName: \"kubernetes.io/projected/b864e103-0adf-4f21-8614-1f15c364feba-kube-api-access-nqgj2\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.222134 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.235794 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.245451 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.314699 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.346311 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.897929 4764 generic.go:334] "Generic (PLEG): container finished" podID="04b38cb9-d915-4d86-af3f-1fccb514587f" containerID="d6889f54e291162dba7902a4ed27ed0fcc11a596c4d7b19375b415ed8a2be5ee" exitCode=0 Jan 27 07:34:07 crc kubenswrapper[4764]: I0127 07:34:07.897975 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g896j" event={"ID":"04b38cb9-d915-4d86-af3f-1fccb514587f","Type":"ContainerDied","Data":"d6889f54e291162dba7902a4ed27ed0fcc11a596c4d7b19375b415ed8a2be5ee"} Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.206246 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-569cdf58df-jw5kj"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.258715 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68d75bdb9d-z5cr4"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.263341 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.270201 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.272385 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.278279 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68d75bdb9d-z5cr4"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.309612 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27lz\" (UniqueName: \"kubernetes.io/projected/6b44fa92-de90-4956-8425-e184375fddc1-kube-api-access-l27lz\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.309923 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b44fa92-de90-4956-8425-e184375fddc1-logs\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.310026 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-config-data\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.310160 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-tls-certs\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.310259 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-secret-key\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.310349 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-scripts\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.310429 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-combined-ca-bundle\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.339513 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79c4cf44df-7x8nj"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.355839 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5ff9bfcff8-v9nrc"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.358946 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.406207 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.412834 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-secret-key\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.412885 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-scripts\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.412911 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-combined-ca-bundle\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.412983 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-horizon-secret-key\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413004 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-horizon-tls-certs\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413027 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l27lz\" (UniqueName: \"kubernetes.io/projected/6b44fa92-de90-4956-8425-e184375fddc1-kube-api-access-l27lz\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413049 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acbb02d-c98c-4b45-bc09-dd13fe383502-scripts\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413066 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acbb02d-c98c-4b45-bc09-dd13fe383502-logs\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413086 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b44fa92-de90-4956-8425-e184375fddc1-logs\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413110 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-config-data\") pod 
\"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413130 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z994\" (UniqueName: \"kubernetes.io/projected/4acbb02d-c98c-4b45-bc09-dd13fe383502-kube-api-access-6z994\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413159 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-combined-ca-bundle\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413180 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acbb02d-c98c-4b45-bc09-dd13fe383502-config-data\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.413227 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-tls-certs\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.414625 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-scripts\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: 
\"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.414968 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b44fa92-de90-4956-8425-e184375fddc1-logs\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.417314 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5ff9bfcff8-v9nrc"] Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.418211 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-secret-key\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.426245 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-config-data\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.428869 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-combined-ca-bundle\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.437581 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l27lz\" (UniqueName: \"kubernetes.io/projected/6b44fa92-de90-4956-8425-e184375fddc1-kube-api-access-l27lz\") pod 
\"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.465428 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-tls-certs\") pod \"horizon-68d75bdb9d-z5cr4\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.493236 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9fb048-8078-42de-846a-a7a77ac34d05" path="/var/lib/kubelet/pods/1b9fb048-8078-42de-846a-a7a77ac34d05/volumes" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.494557 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b16d2c7-ef2f-4198-947f-f688d3018a26" path="/var/lib/kubelet/pods/9b16d2c7-ef2f-4198-947f-f688d3018a26/volumes" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.514595 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acbb02d-c98c-4b45-bc09-dd13fe383502-scripts\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.514979 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acbb02d-c98c-4b45-bc09-dd13fe383502-logs\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.515180 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z994\" (UniqueName: \"kubernetes.io/projected/4acbb02d-c98c-4b45-bc09-dd13fe383502-kube-api-access-6z994\") pod 
\"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.515319 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-combined-ca-bundle\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.516046 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acbb02d-c98c-4b45-bc09-dd13fe383502-config-data\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.516275 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-horizon-secret-key\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.516361 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-horizon-tls-certs\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.516430 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acbb02d-c98c-4b45-bc09-dd13fe383502-logs\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " 
pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.516912 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acbb02d-c98c-4b45-bc09-dd13fe383502-scripts\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.517752 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acbb02d-c98c-4b45-bc09-dd13fe383502-config-data\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.518944 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-combined-ca-bundle\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.519963 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-horizon-tls-certs\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.520739 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acbb02d-c98c-4b45-bc09-dd13fe383502-horizon-secret-key\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.542065 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6z994\" (UniqueName: \"kubernetes.io/projected/4acbb02d-c98c-4b45-bc09-dd13fe383502-kube-api-access-6z994\") pod \"horizon-5ff9bfcff8-v9nrc\" (UID: \"4acbb02d-c98c-4b45-bc09-dd13fe383502\") " pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.598925 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.688873 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:08 crc kubenswrapper[4764]: I0127 07:34:08.952872 4764 scope.go:117] "RemoveContainer" containerID="0a896c3524bf53878abf2ab1fbe0f43a4e5b3e22bd7271df0a2a83f1a56c188b" Jan 27 07:34:10 crc kubenswrapper[4764]: I0127 07:34:10.807803 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:10 crc kubenswrapper[4764]: I0127 07:34:10.860472 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-gp6p9"] Jan 27 07:34:10 crc kubenswrapper[4764]: I0127 07:34:10.860719 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="dnsmasq-dns" containerID="cri-o://e2a1bfed2ec6463d5f9309e543437ff3ec41f327d94fe55e855476d16d032c41" gracePeriod=10 Jan 27 07:34:11 crc kubenswrapper[4764]: I0127 07:34:11.943394 4764 generic.go:334] "Generic (PLEG): container finished" podID="4799479c-9f58-454a-be5a-1c00be138aa4" containerID="e2a1bfed2ec6463d5f9309e543437ff3ec41f327d94fe55e855476d16d032c41" exitCode=0 Jan 27 07:34:11 crc kubenswrapper[4764]: I0127 07:34:11.943477 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" 
event={"ID":"4799479c-9f58-454a-be5a-1c00be138aa4","Type":"ContainerDied","Data":"e2a1bfed2ec6463d5f9309e543437ff3ec41f327d94fe55e855476d16d032c41"} Jan 27 07:34:13 crc kubenswrapper[4764]: I0127 07:34:13.866904 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Jan 27 07:34:18 crc kubenswrapper[4764]: E0127 07:34:18.706061 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 27 07:34:18 crc kubenswrapper[4764]: E0127 07:34:18.707044 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fh68ch7ch674h557hd4hb6h664h64ch57fhcch6fhc8h5f4hd7h94h5cch67bhb4h5dh64bhfdh9fhfdh596hbfhfbh5cbh7ch586h9h575q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2zwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5bf9d6d457-z2pfm_openstack(ec115988-20b0-4c1d-b09f-803bede49014): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:34:18 crc kubenswrapper[4764]: E0127 07:34:18.721118 
4764 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-5bf9d6d457-z2pfm" podUID="ec115988-20b0-4c1d-b09f-803bede49014" Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.842414 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.865394 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.975885 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-fernet-keys\") pod \"04b38cb9-d915-4d86-af3f-1fccb514587f\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.975989 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-credential-keys\") pod \"04b38cb9-d915-4d86-af3f-1fccb514587f\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.976025 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8j66\" (UniqueName: 
\"kubernetes.io/projected/04b38cb9-d915-4d86-af3f-1fccb514587f-kube-api-access-q8j66\") pod \"04b38cb9-d915-4d86-af3f-1fccb514587f\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.976114 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-config-data\") pod \"04b38cb9-d915-4d86-af3f-1fccb514587f\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.976144 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-scripts\") pod \"04b38cb9-d915-4d86-af3f-1fccb514587f\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.976241 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-combined-ca-bundle\") pod \"04b38cb9-d915-4d86-af3f-1fccb514587f\" (UID: \"04b38cb9-d915-4d86-af3f-1fccb514587f\") " Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.982175 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b38cb9-d915-4d86-af3f-1fccb514587f-kube-api-access-q8j66" (OuterVolumeSpecName: "kube-api-access-q8j66") pod "04b38cb9-d915-4d86-af3f-1fccb514587f" (UID: "04b38cb9-d915-4d86-af3f-1fccb514587f"). InnerVolumeSpecName "kube-api-access-q8j66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.982302 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "04b38cb9-d915-4d86-af3f-1fccb514587f" (UID: "04b38cb9-d915-4d86-af3f-1fccb514587f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.982242 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "04b38cb9-d915-4d86-af3f-1fccb514587f" (UID: "04b38cb9-d915-4d86-af3f-1fccb514587f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:18 crc kubenswrapper[4764]: I0127 07:34:18.999153 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-scripts" (OuterVolumeSpecName: "scripts") pod "04b38cb9-d915-4d86-af3f-1fccb514587f" (UID: "04b38cb9-d915-4d86-af3f-1fccb514587f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.005861 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g896j" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.006249 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g896j" event={"ID":"04b38cb9-d915-4d86-af3f-1fccb514587f","Type":"ContainerDied","Data":"ab48f233021f8166a6815bcde5a5d59d14a86f9582f1eb9cda0fd1d2d3bf71a9"} Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.006306 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab48f233021f8166a6815bcde5a5d59d14a86f9582f1eb9cda0fd1d2d3bf71a9" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.006530 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04b38cb9-d915-4d86-af3f-1fccb514587f" (UID: "04b38cb9-d915-4d86-af3f-1fccb514587f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.021603 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-config-data" (OuterVolumeSpecName: "config-data") pod "04b38cb9-d915-4d86-af3f-1fccb514587f" (UID: "04b38cb9-d915-4d86-af3f-1fccb514587f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.078645 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.078708 4764 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.078717 4764 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.078726 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8j66\" (UniqueName: \"kubernetes.io/projected/04b38cb9-d915-4d86-af3f-1fccb514587f-kube-api-access-q8j66\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.078737 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.078745 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b38cb9-d915-4d86-af3f-1fccb514587f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.924326 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g896j"] Jan 27 07:34:19 crc kubenswrapper[4764]: I0127 07:34:19.935072 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g896j"] Jan 27 07:34:20 crc 
kubenswrapper[4764]: I0127 07:34:20.035414 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cz896"] Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.035819 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b38cb9-d915-4d86-af3f-1fccb514587f" containerName="keystone-bootstrap" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.035840 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b38cb9-d915-4d86-af3f-1fccb514587f" containerName="keystone-bootstrap" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.036081 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b38cb9-d915-4d86-af3f-1fccb514587f" containerName="keystone-bootstrap" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.036811 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.038833 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.040744 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.044551 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88tdq" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.044575 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.045521 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.048129 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cz896"] Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.204126 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-fernet-keys\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.204173 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-config-data\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.204194 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-combined-ca-bundle\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.204209 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-credential-keys\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.204633 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7nc7\" (UniqueName: \"kubernetes.io/projected/ed1e2f4c-b077-4a04-8edb-4d169e42964e-kube-api-access-g7nc7\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.204797 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-scripts\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.306979 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-scripts\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.307074 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-fernet-keys\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.307140 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-config-data\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.307174 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-combined-ca-bundle\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.307206 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-credential-keys\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.307396 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nc7\" (UniqueName: \"kubernetes.io/projected/ed1e2f4c-b077-4a04-8edb-4d169e42964e-kube-api-access-g7nc7\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.312058 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-scripts\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.312368 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-config-data\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.313392 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-combined-ca-bundle\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.320082 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-fernet-keys\") pod \"keystone-bootstrap-cz896\" (UID: 
\"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.322532 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-credential-keys\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.323653 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nc7\" (UniqueName: \"kubernetes.io/projected/ed1e2f4c-b077-4a04-8edb-4d169e42964e-kube-api-access-g7nc7\") pod \"keystone-bootstrap-cz896\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.356064 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:20 crc kubenswrapper[4764]: I0127 07:34:20.451008 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b38cb9-d915-4d86-af3f-1fccb514587f" path="/var/lib/kubelet/pods/04b38cb9-d915-4d86-af3f-1fccb514587f/volumes" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.512247 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.512420 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n558h5ch599h548h55h5ffh664h8bhbdh5h54fhd9h575h64dh66bh55ch5bfh5b7h5dfh674h687hb6h69h67bh685h77h645h68bh54fh9dh6h595q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ll7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-79c4cf44df-7x8nj_openstack(73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 
07:34:20.531423 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-79c4cf44df-7x8nj" podUID="73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.792006 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.792257 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57h689h64ch689h5ch5ddh5ch9fhd8h5fhc4h66ch55bh56chd9h696h8h649h57h76h677h547h56dh557h64bh65h8fh557hf7h569h5ch55dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czpgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(303940fa-42c3-4597-a545-66c946caf680): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.807338 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.807531 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5fdh688h574h697h9fh595hb5h89h86hdch646h9chb7h5bh574h5b6h66dh5fdh567h87hc6h56hf8hdbh58fh8h7dh95h5fh697h95q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89dbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-569cdf58df-jw5kj_openstack(a1ffa484-1a33-47eb-bf0e-17a4a6584c1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:34:20 crc kubenswrapper[4764]: E0127 07:34:20.809700 
4764 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-569cdf58df-jw5kj" podUID="a1ffa484-1a33-47eb-bf0e-17a4a6584c1f" Jan 27 07:34:28 crc kubenswrapper[4764]: E0127 07:34:28.114237 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 27 07:34:28 crc kubenswrapper[4764]: E0127 07:34:28.114937 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq6hx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4tp44_openstack(7cfdc388-3353-43e2-99b0-7c6e17fb78f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:34:28 crc kubenswrapper[4764]: E0127 07:34:28.116165 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4tp44" 
podUID="7cfdc388-3353-43e2-99b0-7c6e17fb78f9" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.274534 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.297328 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.302866 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.307914 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.352480 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-sb\") pod \"4799479c-9f58-454a-be5a-1c00be138aa4\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.352564 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-scripts\") pod \"ec115988-20b0-4c1d-b09f-803bede49014\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.352654 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-dns-svc\") pod \"4799479c-9f58-454a-be5a-1c00be138aa4\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.352716 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/ec115988-20b0-4c1d-b09f-803bede49014-horizon-secret-key\") pod \"ec115988-20b0-4c1d-b09f-803bede49014\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.352805 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-nb\") pod \"4799479c-9f58-454a-be5a-1c00be138aa4\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.352937 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-config-data\") pod \"ec115988-20b0-4c1d-b09f-803bede49014\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.352958 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2zwx\" (UniqueName: \"kubernetes.io/projected/ec115988-20b0-4c1d-b09f-803bede49014-kube-api-access-j2zwx\") pod \"ec115988-20b0-4c1d-b09f-803bede49014\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.353032 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-config\") pod \"4799479c-9f58-454a-be5a-1c00be138aa4\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.353057 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec115988-20b0-4c1d-b09f-803bede49014-logs\") pod \"ec115988-20b0-4c1d-b09f-803bede49014\" (UID: \"ec115988-20b0-4c1d-b09f-803bede49014\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.353099 4764 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbvsc\" (UniqueName: \"kubernetes.io/projected/4799479c-9f58-454a-be5a-1c00be138aa4-kube-api-access-rbvsc\") pod \"4799479c-9f58-454a-be5a-1c00be138aa4\" (UID: \"4799479c-9f58-454a-be5a-1c00be138aa4\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.359575 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-config-data" (OuterVolumeSpecName: "config-data") pod "ec115988-20b0-4c1d-b09f-803bede49014" (UID: "ec115988-20b0-4c1d-b09f-803bede49014"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.360111 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-scripts" (OuterVolumeSpecName: "scripts") pod "ec115988-20b0-4c1d-b09f-803bede49014" (UID: "ec115988-20b0-4c1d-b09f-803bede49014"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.368234 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec115988-20b0-4c1d-b09f-803bede49014-logs" (OuterVolumeSpecName: "logs") pod "ec115988-20b0-4c1d-b09f-803bede49014" (UID: "ec115988-20b0-4c1d-b09f-803bede49014"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.372861 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4799479c-9f58-454a-be5a-1c00be138aa4-kube-api-access-rbvsc" (OuterVolumeSpecName: "kube-api-access-rbvsc") pod "4799479c-9f58-454a-be5a-1c00be138aa4" (UID: "4799479c-9f58-454a-be5a-1c00be138aa4"). InnerVolumeSpecName "kube-api-access-rbvsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.379817 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec115988-20b0-4c1d-b09f-803bede49014-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ec115988-20b0-4c1d-b09f-803bede49014" (UID: "ec115988-20b0-4c1d-b09f-803bede49014"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.398049 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec115988-20b0-4c1d-b09f-803bede49014-kube-api-access-j2zwx" (OuterVolumeSpecName: "kube-api-access-j2zwx") pod "ec115988-20b0-4c1d-b09f-803bede49014" (UID: "ec115988-20b0-4c1d-b09f-803bede49014"). InnerVolumeSpecName "kube-api-access-j2zwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.426649 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-config" (OuterVolumeSpecName: "config") pod "4799479c-9f58-454a-be5a-1c00be138aa4" (UID: "4799479c-9f58-454a-be5a-1c00be138aa4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.429427 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4799479c-9f58-454a-be5a-1c00be138aa4" (UID: "4799479c-9f58-454a-be5a-1c00be138aa4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454650 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-horizon-secret-key\") pod \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454707 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-scripts\") pod \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454727 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-horizon-secret-key\") pod \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454762 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ll7l\" (UniqueName: \"kubernetes.io/projected/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-kube-api-access-9ll7l\") pod \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454789 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89dbr\" (UniqueName: \"kubernetes.io/projected/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-kube-api-access-89dbr\") pod \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454857 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-config-data\") pod \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454901 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-logs\") pod \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454929 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-logs\") pod \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.454962 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-config-data\") pod \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\" (UID: \"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455035 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-scripts\") pod \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\" (UID: \"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d\") " Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455375 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455387 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2zwx\" (UniqueName: 
\"kubernetes.io/projected/ec115988-20b0-4c1d-b09f-803bede49014-kube-api-access-j2zwx\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455396 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455404 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec115988-20b0-4c1d-b09f-803bede49014-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455413 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbvsc\" (UniqueName: \"kubernetes.io/projected/4799479c-9f58-454a-be5a-1c00be138aa4-kube-api-access-rbvsc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455421 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec115988-20b0-4c1d-b09f-803bede49014-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455431 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.455452 4764 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec115988-20b0-4c1d-b09f-803bede49014-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.456816 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-logs" (OuterVolumeSpecName: "logs") pod "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f" (UID: 
"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.456906 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-logs" (OuterVolumeSpecName: "logs") pod "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d" (UID: "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.456985 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-config-data" (OuterVolumeSpecName: "config-data") pod "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d" (UID: "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.458635 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-scripts" (OuterVolumeSpecName: "scripts") pod "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f" (UID: "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.459036 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-kube-api-access-89dbr" (OuterVolumeSpecName: "kube-api-access-89dbr") pod "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f" (UID: "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f"). InnerVolumeSpecName "kube-api-access-89dbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.459335 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-scripts" (OuterVolumeSpecName: "scripts") pod "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d" (UID: "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.461958 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-kube-api-access-9ll7l" (OuterVolumeSpecName: "kube-api-access-9ll7l") pod "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d" (UID: "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d"). InnerVolumeSpecName "kube-api-access-9ll7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.462568 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-config-data" (OuterVolumeSpecName: "config-data") pod "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f" (UID: "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.464646 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d" (UID: "73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.465541 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f" (UID: "a1ffa484-1a33-47eb-bf0e-17a4a6584c1f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.468482 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4799479c-9f58-454a-be5a-1c00be138aa4" (UID: "4799479c-9f58-454a-be5a-1c00be138aa4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.469034 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4799479c-9f58-454a-be5a-1c00be138aa4" (UID: "4799479c-9f58-454a-be5a-1c00be138aa4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.552352 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68d75bdb9d-z5cr4"] Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.558081 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.558366 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.558462 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559292 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559343 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559358 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559371 4764 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-horizon-secret-key\") 
on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559418 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559430 4764 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559482 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ll7l\" (UniqueName: \"kubernetes.io/projected/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d-kube-api-access-9ll7l\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559499 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89dbr\" (UniqueName: \"kubernetes.io/projected/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f-kube-api-access-89dbr\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.559512 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4799479c-9f58-454a-be5a-1c00be138aa4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.864790 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Jan 27 07:34:28 crc kubenswrapper[4764]: I0127 07:34:28.864902 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.088147 4764 generic.go:334] "Generic (PLEG): container finished" 
podID="b3e6ef61-e6b4-4719-ae71-1983696d2d69" containerID="032f9b05639300c211df9ab39967f53ea3beaa71a440612d1afd5cd03f76ebad" exitCode=0 Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.088216 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g7f7g" event={"ID":"b3e6ef61-e6b4-4719-ae71-1983696d2d69","Type":"ContainerDied","Data":"032f9b05639300c211df9ab39967f53ea3beaa71a440612d1afd5cd03f76ebad"} Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.089576 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bf9d6d457-z2pfm" Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.089599 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf9d6d457-z2pfm" event={"ID":"ec115988-20b0-4c1d-b09f-803bede49014","Type":"ContainerDied","Data":"69f827e86bf12ea9ecb89a0ce57e9672c782ac0634fb8463c4f710663030ae08"} Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.091749 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" event={"ID":"4799479c-9f58-454a-be5a-1c00be138aa4","Type":"ContainerDied","Data":"c09dcc592167e4a95755f612bd01a486efd4a13018175b6d2a2a5f3514667e36"} Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.091782 4764 scope.go:117] "RemoveContainer" containerID="e2a1bfed2ec6463d5f9309e543437ff3ec41f327d94fe55e855476d16d032c41" Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.091786 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-gp6p9" Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.093939 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c4cf44df-7x8nj" event={"ID":"73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d","Type":"ContainerDied","Data":"16d92cac6fe05d13f99cfb65221eac06d968d3d555e8a1e0de69551265e3ebfc"} Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.093984 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79c4cf44df-7x8nj" Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.095123 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-569cdf58df-jw5kj" event={"ID":"a1ffa484-1a33-47eb-bf0e-17a4a6584c1f","Type":"ContainerDied","Data":"2a3164fed0ca66ac8b422a48d3a78ad951d55b94b3d69fce25f0434a6e83d73e"} Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.095128 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569cdf58df-jw5kj" Jan 27 07:34:29 crc kubenswrapper[4764]: E0127 07:34:29.097014 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-4tp44" podUID="7cfdc388-3353-43e2-99b0-7c6e17fb78f9" Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.213624 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79c4cf44df-7x8nj"] Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.219550 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79c4cf44df-7x8nj"] Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.227641 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-gp6p9"] Jan 27 
07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.238122 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-gp6p9"] Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.255318 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-569cdf58df-jw5kj"] Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.268731 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-569cdf58df-jw5kj"] Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.290086 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bf9d6d457-z2pfm"] Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.298252 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bf9d6d457-z2pfm"] Jan 27 07:34:29 crc kubenswrapper[4764]: W0127 07:34:29.621949 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b44fa92_de90_4956_8425_e184375fddc1.slice/crio-59d79f33e328abf3f1518dd953232239f4beae55ab824c16d478d449226521f1 WatchSource:0}: Error finding container 59d79f33e328abf3f1518dd953232239f4beae55ab824c16d478d449226521f1: Status 404 returned error can't find the container with id 59d79f33e328abf3f1518dd953232239f4beae55ab824c16d478d449226521f1 Jan 27 07:34:29 crc kubenswrapper[4764]: E0127 07:34:29.659086 4764 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 27 07:34:29 crc kubenswrapper[4764]: E0127 07:34:29.659516 4764 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgxf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompPro
file:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7875w_openstack(64c3a86b-6e48-4aa4-950e-d8ecf643cf48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:34:29 crc kubenswrapper[4764]: E0127 07:34:29.661630 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7875w" podUID="64c3a86b-6e48-4aa4-950e-d8ecf643cf48" Jan 27 07:34:29 crc kubenswrapper[4764]: I0127 07:34:29.916708 4764 scope.go:117] "RemoveContainer" containerID="fff23981e3e4c86d1b53aaf9af75598754565dedc7fc72bffa7fe2848d12c91d" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.052617 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5ff9bfcff8-v9nrc"] Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.118478 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5ff9bfcff8-v9nrc" event={"ID":"4acbb02d-c98c-4b45-bc09-dd13fe383502","Type":"ContainerStarted","Data":"e74067e9a85ff5223a62cb0687b1d90852e9189eb551c28b838a8bcafcc994cf"} Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.127549 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68d75bdb9d-z5cr4" event={"ID":"6b44fa92-de90-4956-8425-e184375fddc1","Type":"ContainerStarted","Data":"59d79f33e328abf3f1518dd953232239f4beae55ab824c16d478d449226521f1"} Jan 27 07:34:30 crc kubenswrapper[4764]: E0127 07:34:30.130844 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-7875w" podUID="64c3a86b-6e48-4aa4-950e-d8ecf643cf48" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.301311 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.407745 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.476823 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" path="/var/lib/kubelet/pods/4799479c-9f58-454a-be5a-1c00be138aa4/volumes" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.477967 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d" path="/var/lib/kubelet/pods/73a2fa85-3704-4cbd-a8f3-59a6b4e60b0d/volumes" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.478510 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ffa484-1a33-47eb-bf0e-17a4a6584c1f" path="/var/lib/kubelet/pods/a1ffa484-1a33-47eb-bf0e-17a4a6584c1f/volumes" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.479179 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec115988-20b0-4c1d-b09f-803bede49014" path="/var/lib/kubelet/pods/ec115988-20b0-4c1d-b09f-803bede49014/volumes" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.480639 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cz896"] Jan 27 07:34:30 crc kubenswrapper[4764]: W0127 07:34:30.492918 4764 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded1e2f4c_b077_4a04_8edb_4d169e42964e.slice/crio-f66e95cf6a5e26470c6af8c7ab9b63a28367825db8a9c72ea31bdac9ab4969c5 WatchSource:0}: Error finding container f66e95cf6a5e26470c6af8c7ab9b63a28367825db8a9c72ea31bdac9ab4969c5: Status 404 returned error can't find the container with id f66e95cf6a5e26470c6af8c7ab9b63a28367825db8a9c72ea31bdac9ab4969c5 Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.495703 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.515100 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.594700 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-combined-ca-bundle\") pod \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.594836 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4dpp\" (UniqueName: \"kubernetes.io/projected/b3e6ef61-e6b4-4719-ae71-1983696d2d69-kube-api-access-g4dpp\") pod \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.594966 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-config\") pod \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\" (UID: \"b3e6ef61-e6b4-4719-ae71-1983696d2d69\") " Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.606618 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b3e6ef61-e6b4-4719-ae71-1983696d2d69-kube-api-access-g4dpp" (OuterVolumeSpecName: "kube-api-access-g4dpp") pod "b3e6ef61-e6b4-4719-ae71-1983696d2d69" (UID: "b3e6ef61-e6b4-4719-ae71-1983696d2d69"). InnerVolumeSpecName "kube-api-access-g4dpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.659388 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-config" (OuterVolumeSpecName: "config") pod "b3e6ef61-e6b4-4719-ae71-1983696d2d69" (UID: "b3e6ef61-e6b4-4719-ae71-1983696d2d69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.660175 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3e6ef61-e6b4-4719-ae71-1983696d2d69" (UID: "b3e6ef61-e6b4-4719-ae71-1983696d2d69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.697643 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.697669 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4dpp\" (UniqueName: \"kubernetes.io/projected/b3e6ef61-e6b4-4719-ae71-1983696d2d69-kube-api-access-g4dpp\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:30 crc kubenswrapper[4764]: I0127 07:34:30.697681 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3e6ef61-e6b4-4719-ae71-1983696d2d69-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.152218 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtl6s" event={"ID":"77beac3a-985b-45d4-b804-ff2926d7ab7d","Type":"ContainerStarted","Data":"2947ec70b8ed8382f839405bddbd121b048a970273e77ec5e7c8b5751ece6bb3"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.158290 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cz896" event={"ID":"ed1e2f4c-b077-4a04-8edb-4d169e42964e","Type":"ContainerStarted","Data":"1b0c43f340897be7ab9cd3bb916d6490422d39699247c8ee11e6c854c3d10130"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.158333 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cz896" event={"ID":"ed1e2f4c-b077-4a04-8edb-4d169e42964e","Type":"ContainerStarted","Data":"f66e95cf6a5e26470c6af8c7ab9b63a28367825db8a9c72ea31bdac9ab4969c5"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.161545 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerStarted","Data":"8fceea8466a9b71b43b381a540b8debd35f1a337ccced3e7b157abacd7e45483"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.171065 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b864e103-0adf-4f21-8614-1f15c364feba","Type":"ContainerStarted","Data":"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.171114 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b864e103-0adf-4f21-8614-1f15c364feba","Type":"ContainerStarted","Data":"06c35d1cdebecf432432629a31b1bc2c90deee0238fe6a2449878b75569d501e"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.175403 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g7f7g" event={"ID":"b3e6ef61-e6b4-4719-ae71-1983696d2d69","Type":"ContainerDied","Data":"24afbff970175b8daa6bad5bc6f034306425a0c78687be491fdf7a28e65b2849"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.175462 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24afbff970175b8daa6bad5bc6f034306425a0c78687be491fdf7a28e65b2849" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.175529 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g7f7g" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.219640 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dtl6s" podStartSLOduration=4.986357756 podStartE2EDuration="31.218875617s" podCreationTimestamp="2026-01-27 07:34:00 +0000 UTC" firstStartedPulling="2026-01-27 07:34:01.888977319 +0000 UTC m=+1054.484599845" lastFinishedPulling="2026-01-27 07:34:28.12149518 +0000 UTC m=+1080.717117706" observedRunningTime="2026-01-27 07:34:31.205007683 +0000 UTC m=+1083.800630209" watchObservedRunningTime="2026-01-27 07:34:31.218875617 +0000 UTC m=+1083.814498143" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.228183 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5345db9d-96c7-4d88-a071-ba0995678ce6","Type":"ContainerStarted","Data":"2ff4f7a80c720e638df5f2ea08f124a62ecacfeb4c71747f31563dc19462b7d1"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.228238 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5345db9d-96c7-4d88-a071-ba0995678ce6","Type":"ContainerStarted","Data":"32c74ed48be94f1d34ee29820f24c619745a99b070df315eb300cd01557d6cd5"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.240720 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cz896" podStartSLOduration=11.240704839 podStartE2EDuration="11.240704839s" podCreationTimestamp="2026-01-27 07:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:31.2357825 +0000 UTC m=+1083.831405016" watchObservedRunningTime="2026-01-27 07:34:31.240704839 +0000 UTC m=+1083.836327365" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.262138 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-68d75bdb9d-z5cr4" event={"ID":"6b44fa92-de90-4956-8425-e184375fddc1","Type":"ContainerStarted","Data":"98a65d9ef8ad339cc7701d7b492d80a604c5a2e55551f85aca3daebb7b8166c4"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.262201 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68d75bdb9d-z5cr4" event={"ID":"6b44fa92-de90-4956-8425-e184375fddc1","Type":"ContainerStarted","Data":"8b8b9b70418dd3efcb6afb423d022b9e64927b83f9a4d510ee0fb8ed56b77411"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.276055 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5ff9bfcff8-v9nrc" event={"ID":"4acbb02d-c98c-4b45-bc09-dd13fe383502","Type":"ContainerStarted","Data":"ffb0486fdc666631449e7ddbe3a0956758c820f4abbc9521d6f1cf7714f2a66b"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.276112 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5ff9bfcff8-v9nrc" event={"ID":"4acbb02d-c98c-4b45-bc09-dd13fe383502","Type":"ContainerStarted","Data":"c62b393acd419679c1362be13fcf93e0e3ec6fcf7190c561e577fd701462ef07"} Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.304682 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68d75bdb9d-z5cr4" podStartSLOduration=22.798750771 podStartE2EDuration="23.304661726s" podCreationTimestamp="2026-01-27 07:34:08 +0000 UTC" firstStartedPulling="2026-01-27 07:34:29.624126146 +0000 UTC m=+1082.219748672" lastFinishedPulling="2026-01-27 07:34:30.130037101 +0000 UTC m=+1082.725659627" observedRunningTime="2026-01-27 07:34:31.300769944 +0000 UTC m=+1083.896392460" watchObservedRunningTime="2026-01-27 07:34:31.304661726 +0000 UTC m=+1083.900284262" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.338019 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8pkhm"] Jan 27 07:34:31 crc kubenswrapper[4764]: E0127 07:34:31.338380 4764 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="init" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.338397 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="init" Jan 27 07:34:31 crc kubenswrapper[4764]: E0127 07:34:31.338415 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="dnsmasq-dns" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.338421 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="dnsmasq-dns" Jan 27 07:34:31 crc kubenswrapper[4764]: E0127 07:34:31.338450 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e6ef61-e6b4-4719-ae71-1983696d2d69" containerName="neutron-db-sync" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.338457 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e6ef61-e6b4-4719-ae71-1983696d2d69" containerName="neutron-db-sync" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.338600 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e6ef61-e6b4-4719-ae71-1983696d2d69" containerName="neutron-db-sync" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.338614 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4799479c-9f58-454a-be5a-1c00be138aa4" containerName="dnsmasq-dns" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.339411 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.349867 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8pkhm"] Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.367365 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5ff9bfcff8-v9nrc" podStartSLOduration=22.879783726 podStartE2EDuration="23.367339559s" podCreationTimestamp="2026-01-27 07:34:08 +0000 UTC" firstStartedPulling="2026-01-27 07:34:30.071384613 +0000 UTC m=+1082.667007139" lastFinishedPulling="2026-01-27 07:34:30.558940446 +0000 UTC m=+1083.154562972" observedRunningTime="2026-01-27 07:34:31.346581565 +0000 UTC m=+1083.942204081" watchObservedRunningTime="2026-01-27 07:34:31.367339559 +0000 UTC m=+1083.962962085" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.405140 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86db767f96-qw88w"] Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.407047 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.413486 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.413687 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lsfh5" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.413831 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.414220 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.434654 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86db767f96-qw88w"] Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446635 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446708 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxqx\" (UniqueName: \"kubernetes.io/projected/7ae4de8f-7af4-459b-ab48-6096fbadfe67-kube-api-access-smxqx\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446737 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-config\") pod \"neutron-86db767f96-qw88w\" (UID: 
\"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446760 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-httpd-config\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446777 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-config\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446795 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446812 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446852 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-combined-ca-bundle\") pod \"neutron-86db767f96-qw88w\" (UID: 
\"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446885 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs98\" (UniqueName: \"kubernetes.io/projected/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-kube-api-access-vvs98\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446913 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.446937 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-ovndb-tls-certs\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.549939 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxqx\" (UniqueName: \"kubernetes.io/projected/7ae4de8f-7af4-459b-ab48-6096fbadfe67-kube-api-access-smxqx\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550001 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-config\") pod \"neutron-86db767f96-qw88w\" (UID: 
\"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550025 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-httpd-config\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550040 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-config\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550059 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550076 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550121 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-combined-ca-bundle\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 
07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550157 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvs98\" (UniqueName: \"kubernetes.io/projected/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-kube-api-access-vvs98\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550186 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550212 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-ovndb-tls-certs\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.550247 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.551505 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.551845 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.552160 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.552264 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.553191 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-config\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.556510 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-config\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.561017 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-httpd-config\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.568230 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-combined-ca-bundle\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.573104 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-ovndb-tls-certs\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.574100 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvs98\" (UniqueName: \"kubernetes.io/projected/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-kube-api-access-vvs98\") pod \"neutron-86db767f96-qw88w\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.573637 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxqx\" (UniqueName: \"kubernetes.io/projected/7ae4de8f-7af4-459b-ab48-6096fbadfe67-kube-api-access-smxqx\") pod \"dnsmasq-dns-6b9c8b59c-8pkhm\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.692904 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:31 crc kubenswrapper[4764]: I0127 07:34:31.737016 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:32 crc kubenswrapper[4764]: I0127 07:34:32.294292 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8pkhm"] Jan 27 07:34:32 crc kubenswrapper[4764]: I0127 07:34:32.554395 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86db767f96-qw88w"] Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.294459 4764 generic.go:334] "Generic (PLEG): container finished" podID="77beac3a-985b-45d4-b804-ff2926d7ab7d" containerID="2947ec70b8ed8382f839405bddbd121b048a970273e77ec5e7c8b5751ece6bb3" exitCode=0 Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.294884 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtl6s" event={"ID":"77beac3a-985b-45d4-b804-ff2926d7ab7d","Type":"ContainerDied","Data":"2947ec70b8ed8382f839405bddbd121b048a970273e77ec5e7c8b5751ece6bb3"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.297060 4764 generic.go:334] "Generic (PLEG): container finished" podID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerID="940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d" exitCode=0 Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.297117 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" event={"ID":"7ae4de8f-7af4-459b-ab48-6096fbadfe67","Type":"ContainerDied","Data":"940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.297141 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" 
event={"ID":"7ae4de8f-7af4-459b-ab48-6096fbadfe67","Type":"ContainerStarted","Data":"dbd2e875ceb23a203614b0d3ca661affc9d3cb821f8efb2a238ba651c489aeff"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.302508 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b864e103-0adf-4f21-8614-1f15c364feba","Type":"ContainerStarted","Data":"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.302633 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-log" containerID="cri-o://d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b" gracePeriod=30 Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.302714 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-httpd" containerID="cri-o://bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8" gracePeriod=30 Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.308666 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5345db9d-96c7-4d88-a071-ba0995678ce6","Type":"ContainerStarted","Data":"7eee264c2b4c3cf8241df972dc54f7dc10315bf28cc242bd49f90a9e5e8f55b8"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.309023 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerName="glance-log" containerID="cri-o://2ff4f7a80c720e638df5f2ea08f124a62ecacfeb4c71747f31563dc19462b7d1" gracePeriod=30 Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.309113 4764 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-internal-api-0" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerName="glance-httpd" containerID="cri-o://7eee264c2b4c3cf8241df972dc54f7dc10315bf28cc242bd49f90a9e5e8f55b8" gracePeriod=30 Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.319146 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86db767f96-qw88w" event={"ID":"0d9c8e92-873f-4623-b6ab-4bc09eacaefd","Type":"ContainerStarted","Data":"1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.319182 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86db767f96-qw88w" event={"ID":"0d9c8e92-873f-4623-b6ab-4bc09eacaefd","Type":"ContainerStarted","Data":"6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.319209 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86db767f96-qw88w" event={"ID":"0d9c8e92-873f-4623-b6ab-4bc09eacaefd","Type":"ContainerStarted","Data":"044a16bf61578c312ee93c2d450782feac072c87d9f968715be8511ca3d87ae4"} Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.319890 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.337536 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.337522204 podStartE2EDuration="27.337522204s" podCreationTimestamp="2026-01-27 07:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:33.334099474 +0000 UTC m=+1085.929722000" watchObservedRunningTime="2026-01-27 07:34:33.337522204 +0000 UTC m=+1085.933144730" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.391757 4764 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=27.391742515 podStartE2EDuration="27.391742515s" podCreationTimestamp="2026-01-27 07:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:33.385781749 +0000 UTC m=+1085.981404295" watchObservedRunningTime="2026-01-27 07:34:33.391742515 +0000 UTC m=+1085.987365031" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.405324 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86db767f96-qw88w" podStartSLOduration=2.405309791 podStartE2EDuration="2.405309791s" podCreationTimestamp="2026-01-27 07:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:33.404980673 +0000 UTC m=+1086.000603199" watchObservedRunningTime="2026-01-27 07:34:33.405309791 +0000 UTC m=+1086.000932317" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.582711 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76bbb58569-zwwt9"] Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.584981 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.588208 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.592753 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.607877 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76bbb58569-zwwt9"] Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.712462 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-combined-ca-bundle\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.712556 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-public-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.712589 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-internal-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.712608 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-config\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.712659 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb4dt\" (UniqueName: \"kubernetes.io/projected/cdd90417-1879-421f-b0a8-04ed0694fb3a-kube-api-access-rb4dt\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.712691 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-httpd-config\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.712716 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-ovndb-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.814553 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb4dt\" (UniqueName: \"kubernetes.io/projected/cdd90417-1879-421f-b0a8-04ed0694fb3a-kube-api-access-rb4dt\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.814853 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-httpd-config\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.814884 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-ovndb-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.814908 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-combined-ca-bundle\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.814965 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-public-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.814994 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-internal-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.815012 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-config\") pod 
\"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.822250 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-httpd-config\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.823064 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-internal-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.823264 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-public-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.825107 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-ovndb-tls-certs\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.827198 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-config\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc 
kubenswrapper[4764]: I0127 07:34:33.829208 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-combined-ca-bundle\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.839170 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb4dt\" (UniqueName: \"kubernetes.io/projected/cdd90417-1879-421f-b0a8-04ed0694fb3a-kube-api-access-rb4dt\") pod \"neutron-76bbb58569-zwwt9\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:33 crc kubenswrapper[4764]: I0127 07:34:33.943879 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.014520 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.122889 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqgj2\" (UniqueName: \"kubernetes.io/projected/b864e103-0adf-4f21-8614-1f15c364feba-kube-api-access-nqgj2\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.122932 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-httpd-run\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.123030 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-scripts\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.123058 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-logs\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.123073 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-combined-ca-bundle\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.123113 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.123154 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-config-data\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.123193 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-public-tls-certs\") pod \"b864e103-0adf-4f21-8614-1f15c364feba\" (UID: \"b864e103-0adf-4f21-8614-1f15c364feba\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.124138 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-logs" (OuterVolumeSpecName: "logs") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.124878 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.132730 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). 
InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.144722 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-scripts" (OuterVolumeSpecName: "scripts") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.145600 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b864e103-0adf-4f21-8614-1f15c364feba-kube-api-access-nqgj2" (OuterVolumeSpecName: "kube-api-access-nqgj2") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). InnerVolumeSpecName "kube-api-access-nqgj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.189715 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.226944 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqgj2\" (UniqueName: \"kubernetes.io/projected/b864e103-0adf-4f21-8614-1f15c364feba-kube-api-access-nqgj2\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.226976 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.226985 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.226996 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b864e103-0adf-4f21-8614-1f15c364feba-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.227004 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.227027 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.248208 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.255102 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-config-data" (OuterVolumeSpecName: "config-data") pod "b864e103-0adf-4f21-8614-1f15c364feba" (UID: "b864e103-0adf-4f21-8614-1f15c364feba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.255494 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.328782 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.328832 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.328843 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b864e103-0adf-4f21-8614-1f15c364feba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.348688 4764 generic.go:334] "Generic (PLEG): container finished" podID="b864e103-0adf-4f21-8614-1f15c364feba" containerID="bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8" exitCode=0 Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.348716 4764 generic.go:334] "Generic (PLEG): container finished" podID="b864e103-0adf-4f21-8614-1f15c364feba" 
containerID="d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b" exitCode=143 Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.348865 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.348863 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b864e103-0adf-4f21-8614-1f15c364feba","Type":"ContainerDied","Data":"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8"} Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.349019 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b864e103-0adf-4f21-8614-1f15c364feba","Type":"ContainerDied","Data":"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b"} Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.349168 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b864e103-0adf-4f21-8614-1f15c364feba","Type":"ContainerDied","Data":"06c35d1cdebecf432432629a31b1bc2c90deee0238fe6a2449878b75569d501e"} Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.349226 4764 scope.go:117] "RemoveContainer" containerID="bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.354543 4764 generic.go:334] "Generic (PLEG): container finished" podID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerID="7eee264c2b4c3cf8241df972dc54f7dc10315bf28cc242bd49f90a9e5e8f55b8" exitCode=0 Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.354576 4764 generic.go:334] "Generic (PLEG): container finished" podID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerID="2ff4f7a80c720e638df5f2ea08f124a62ecacfeb4c71747f31563dc19462b7d1" exitCode=143 Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.354630 4764 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5345db9d-96c7-4d88-a071-ba0995678ce6","Type":"ContainerDied","Data":"7eee264c2b4c3cf8241df972dc54f7dc10315bf28cc242bd49f90a9e5e8f55b8"} Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.354657 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5345db9d-96c7-4d88-a071-ba0995678ce6","Type":"ContainerDied","Data":"2ff4f7a80c720e638df5f2ea08f124a62ecacfeb4c71747f31563dc19462b7d1"} Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.354670 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5345db9d-96c7-4d88-a071-ba0995678ce6","Type":"ContainerDied","Data":"32c74ed48be94f1d34ee29820f24c619745a99b070df315eb300cd01557d6cd5"} Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.354679 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c74ed48be94f1d34ee29820f24c619745a99b070df315eb300cd01557d6cd5" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.359492 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" event={"ID":"7ae4de8f-7af4-459b-ab48-6096fbadfe67","Type":"ContainerStarted","Data":"4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a"} Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.380355 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" podStartSLOduration=3.380334854 podStartE2EDuration="3.380334854s" podCreationTimestamp="2026-01-27 07:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:34.377764706 +0000 UTC m=+1086.973387242" watchObservedRunningTime="2026-01-27 07:34:34.380334854 +0000 UTC m=+1086.975957380" Jan 27 07:34:34 crc kubenswrapper[4764]: 
I0127 07:34:34.381481 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.427669 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.429762 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-scripts\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.429867 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94g64\" (UniqueName: \"kubernetes.io/projected/5345db9d-96c7-4d88-a071-ba0995678ce6-kube-api-access-94g64\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.429907 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-httpd-run\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.429953 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-config-data\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.429984 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: 
\"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.430012 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-logs\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.430061 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-internal-tls-certs\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.430077 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-combined-ca-bundle\") pod \"5345db9d-96c7-4d88-a071-ba0995678ce6\" (UID: \"5345db9d-96c7-4d88-a071-ba0995678ce6\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.432937 4764 scope.go:117] "RemoveContainer" containerID="d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.434379 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.436249 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-logs" (OuterVolumeSpecName: "logs") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.440151 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-scripts" (OuterVolumeSpecName: "scripts") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.444351 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5345db9d-96c7-4d88-a071-ba0995678ce6-kube-api-access-94g64" (OuterVolumeSpecName: "kube-api-access-94g64") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "kube-api-access-94g64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.444459 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.488167 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.524498 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.524666 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:34 crc kubenswrapper[4764]: E0127 07:34:34.525796 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerName="glance-httpd" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.525818 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerName="glance-httpd" Jan 27 07:34:34 crc kubenswrapper[4764]: E0127 07:34:34.525842 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-httpd" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.525850 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-httpd" Jan 27 07:34:34 crc kubenswrapper[4764]: E0127 07:34:34.525866 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerName="glance-log" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.525872 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" 
containerName="glance-log" Jan 27 07:34:34 crc kubenswrapper[4764]: E0127 07:34:34.525880 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-log" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.525886 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-log" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.526191 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-httpd" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.526222 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerName="glance-httpd" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.526236 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" containerName="glance-log" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.526254 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="b864e103-0adf-4f21-8614-1f15c364feba" containerName="glance-log" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.527666 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.530370 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.534895 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.537153 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.537168 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.537177 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.537191 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94g64\" (UniqueName: \"kubernetes.io/projected/5345db9d-96c7-4d88-a071-ba0995678ce6-kube-api-access-94g64\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.532946 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.537202 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5345db9d-96c7-4d88-a071-ba0995678ce6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 
07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.535788 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.541149 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.579208 4764 scope.go:117] "RemoveContainer" containerID="bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8" Jan 27 07:34:34 crc kubenswrapper[4764]: E0127 07:34:34.583027 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8\": container with ID starting with bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8 not found: ID does not exist" containerID="bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.583083 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8"} err="failed to get container status \"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8\": rpc error: code = NotFound desc = could not find container \"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8\": container with ID starting with bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8 not found: ID does not exist" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.583113 4764 scope.go:117] "RemoveContainer" 
containerID="d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b" Jan 27 07:34:34 crc kubenswrapper[4764]: E0127 07:34:34.585151 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b\": container with ID starting with d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b not found: ID does not exist" containerID="d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.585185 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b"} err="failed to get container status \"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b\": rpc error: code = NotFound desc = could not find container \"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b\": container with ID starting with d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b not found: ID does not exist" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.585202 4764 scope.go:117] "RemoveContainer" containerID="bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.585564 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8"} err="failed to get container status \"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8\": rpc error: code = NotFound desc = could not find container \"bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8\": container with ID starting with bb66a0aec9fcaa040ed50f0d62d6f9cd62bebead96e4beb6d6d8e59f2e0cedc8 not found: ID does not exist" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.585590 4764 scope.go:117] 
"RemoveContainer" containerID="d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.588271 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b"} err="failed to get container status \"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b\": rpc error: code = NotFound desc = could not find container \"d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b\": container with ID starting with d1f475ab6eb88b77d39e5b25287501c079d80c7a3d65b9081d505bc21d699f4b not found: ID does not exist" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.588458 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-config-data" (OuterVolumeSpecName: "config-data") pod "5345db9d-96c7-4d88-a071-ba0995678ce6" (UID: "5345db9d-96c7-4d88-a071-ba0995678ce6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.589668 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.627696 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76bbb58569-zwwt9"] Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.638636 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.638667 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.638680 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5345db9d-96c7-4d88-a071-ba0995678ce6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740216 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-scripts\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740359 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-config-data\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " 
pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740407 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740530 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bsml\" (UniqueName: \"kubernetes.io/projected/1b8645de-4272-4382-bb78-4ec88cdba698-kube-api-access-5bsml\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740566 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740590 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-logs\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740649 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.740685 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.760945 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.841880 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bsml\" (UniqueName: \"kubernetes.io/projected/1b8645de-4272-4382-bb78-4ec88cdba698-kube-api-access-5bsml\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.841944 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.841968 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-logs\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.842031 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.842063 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.842137 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-scripts\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.842197 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-config-data\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.842227 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.842509 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.846661 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-logs\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.846748 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.847727 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-scripts\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.849590 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.850510 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " 
pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.850887 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-config-data\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.858174 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bsml\" (UniqueName: \"kubernetes.io/projected/1b8645de-4272-4382-bb78-4ec88cdba698-kube-api-access-5bsml\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.881829 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " pod="openstack/glance-default-external-api-0" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.942956 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-config-data\") pod \"77beac3a-985b-45d4-b804-ff2926d7ab7d\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.943031 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xff77\" (UniqueName: \"kubernetes.io/projected/77beac3a-985b-45d4-b804-ff2926d7ab7d-kube-api-access-xff77\") pod \"77beac3a-985b-45d4-b804-ff2926d7ab7d\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.944138 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-combined-ca-bundle\") pod \"77beac3a-985b-45d4-b804-ff2926d7ab7d\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.944183 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-scripts\") pod \"77beac3a-985b-45d4-b804-ff2926d7ab7d\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.944285 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77beac3a-985b-45d4-b804-ff2926d7ab7d-logs\") pod \"77beac3a-985b-45d4-b804-ff2926d7ab7d\" (UID: \"77beac3a-985b-45d4-b804-ff2926d7ab7d\") " Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.944975 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77beac3a-985b-45d4-b804-ff2926d7ab7d-logs" (OuterVolumeSpecName: "logs") pod "77beac3a-985b-45d4-b804-ff2926d7ab7d" (UID: "77beac3a-985b-45d4-b804-ff2926d7ab7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.946621 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77beac3a-985b-45d4-b804-ff2926d7ab7d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.948658 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77beac3a-985b-45d4-b804-ff2926d7ab7d-kube-api-access-xff77" (OuterVolumeSpecName: "kube-api-access-xff77") pod "77beac3a-985b-45d4-b804-ff2926d7ab7d" (UID: "77beac3a-985b-45d4-b804-ff2926d7ab7d"). 
InnerVolumeSpecName "kube-api-access-xff77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.950112 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-scripts" (OuterVolumeSpecName: "scripts") pod "77beac3a-985b-45d4-b804-ff2926d7ab7d" (UID: "77beac3a-985b-45d4-b804-ff2926d7ab7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.978257 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77beac3a-985b-45d4-b804-ff2926d7ab7d" (UID: "77beac3a-985b-45d4-b804-ff2926d7ab7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:34 crc kubenswrapper[4764]: I0127 07:34:34.997016 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-config-data" (OuterVolumeSpecName: "config-data") pod "77beac3a-985b-45d4-b804-ff2926d7ab7d" (UID: "77beac3a-985b-45d4-b804-ff2926d7ab7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.048483 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.048530 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xff77\" (UniqueName: \"kubernetes.io/projected/77beac3a-985b-45d4-b804-ff2926d7ab7d-kube-api-access-xff77\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.048541 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.048550 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77beac3a-985b-45d4-b804-ff2926d7ab7d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.153247 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.389404 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76bbb58569-zwwt9" event={"ID":"cdd90417-1879-421f-b0a8-04ed0694fb3a","Type":"ContainerStarted","Data":"dc060c52c96638446bc4068ce183fd46981dacb69d81dfc78b824c5ca5a8dceb"} Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.389753 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76bbb58569-zwwt9" event={"ID":"cdd90417-1879-421f-b0a8-04ed0694fb3a","Type":"ContainerStarted","Data":"9859784a74b709831b7d92e87105e8bba945a25ce3cf7ead857f595e31fd157a"} Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.389764 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76bbb58569-zwwt9" event={"ID":"cdd90417-1879-421f-b0a8-04ed0694fb3a","Type":"ContainerStarted","Data":"cc31961e5ab13f028fc08e642b653874b203e104f988e2a37fdec2508504eda2"} Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.390160 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.392050 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtl6s" event={"ID":"77beac3a-985b-45d4-b804-ff2926d7ab7d","Type":"ContainerDied","Data":"2cf61e4a76bdd8384c44bed5ad2136ec0cf9821bca172ed6747035476b30a099"} Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.392069 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cf61e4a76bdd8384c44bed5ad2136ec0cf9821bca172ed6747035476b30a099" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.392108 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dtl6s" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.406150 4764 generic.go:334] "Generic (PLEG): container finished" podID="ed1e2f4c-b077-4a04-8edb-4d169e42964e" containerID="1b0c43f340897be7ab9cd3bb916d6490422d39699247c8ee11e6c854c3d10130" exitCode=0 Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.407490 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cz896" event={"ID":"ed1e2f4c-b077-4a04-8edb-4d169e42964e","Type":"ContainerDied","Data":"1b0c43f340897be7ab9cd3bb916d6490422d39699247c8ee11e6c854c3d10130"} Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.407545 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.407654 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.460875 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7fbd5ff6f8-6tmz6"] Jan 27 07:34:35 crc kubenswrapper[4764]: E0127 07:34:35.461410 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77beac3a-985b-45d4-b804-ff2926d7ab7d" containerName="placement-db-sync" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.461429 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="77beac3a-985b-45d4-b804-ff2926d7ab7d" containerName="placement-db-sync" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.461664 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="77beac3a-985b-45d4-b804-ff2926d7ab7d" containerName="placement-db-sync" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.462665 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.462798 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76bbb58569-zwwt9" podStartSLOduration=2.462776373 podStartE2EDuration="2.462776373s" podCreationTimestamp="2026-01-27 07:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:35.439056531 +0000 UTC m=+1088.034679057" watchObservedRunningTime="2026-01-27 07:34:35.462776373 +0000 UTC m=+1088.058398899" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.467938 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.468009 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.468274 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-np27c" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.468415 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.468541 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.493044 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fbd5ff6f8-6tmz6"] Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.520133 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.544058 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:35 crc 
kubenswrapper[4764]: I0127 07:34:35.558250 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.560132 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.563181 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.563483 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579407 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-logs\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579483 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-combined-ca-bundle\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579515 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579535 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-internal-tls-certs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579563 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-config-data\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579600 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579633 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579655 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579679 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579712 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmk9\" (UniqueName: \"kubernetes.io/projected/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-kube-api-access-dcmk9\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579741 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-scripts\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579760 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-225sz\" (UniqueName: \"kubernetes.io/projected/4be230ef-8bfc-453c-9653-dcae5c70bee7-kube-api-access-225sz\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579779 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 
07:34:35.579795 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-logs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.579812 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-public-tls-certs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.594765 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682687 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682755 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682808 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmk9\" (UniqueName: \"kubernetes.io/projected/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-kube-api-access-dcmk9\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: 
\"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682847 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-scripts\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682872 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-225sz\" (UniqueName: \"kubernetes.io/projected/4be230ef-8bfc-453c-9653-dcae5c70bee7-kube-api-access-225sz\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682891 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682910 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-logs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682934 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-public-tls-certs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " 
pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.682989 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-logs\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.683018 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-combined-ca-bundle\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.683051 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.683078 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-internal-tls-certs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.683111 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-config-data\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 
07:34:35.683144 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.683657 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-logs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.683994 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.683159 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.687110 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.692967 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-logs\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.693636 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-public-tls-certs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.694874 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-internal-tls-certs\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.697192 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.698337 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-combined-ca-bundle\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.699036 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.700021 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.706032 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-scripts\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.714477 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.718224 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-config-data\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.720235 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-225sz\" (UniqueName: \"kubernetes.io/projected/4be230ef-8bfc-453c-9653-dcae5c70bee7-kube-api-access-225sz\") pod \"glance-default-internal-api-0\" (UID: 
\"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.723170 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmk9\" (UniqueName: \"kubernetes.io/projected/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-kube-api-access-dcmk9\") pod \"placement-7fbd5ff6f8-6tmz6\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.749088 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.794069 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.840890 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:34:35 crc kubenswrapper[4764]: I0127 07:34:35.914787 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:36 crc kubenswrapper[4764]: I0127 07:34:36.458135 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5345db9d-96c7-4d88-a071-ba0995678ce6" path="/var/lib/kubelet/pods/5345db9d-96c7-4d88-a071-ba0995678ce6/volumes" Jan 27 07:34:36 crc kubenswrapper[4764]: I0127 07:34:36.459332 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b864e103-0adf-4f21-8614-1f15c364feba" path="/var/lib/kubelet/pods/b864e103-0adf-4f21-8614-1f15c364feba/volumes" Jan 27 07:34:38 crc kubenswrapper[4764]: I0127 07:34:38.599358 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:38 crc kubenswrapper[4764]: I0127 07:34:38.599912 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:34:38 crc kubenswrapper[4764]: I0127 07:34:38.689601 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:38 crc kubenswrapper[4764]: I0127 07:34:38.689690 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:34:39 crc kubenswrapper[4764]: W0127 07:34:39.255240 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8645de_4272_4382_bb78_4ec88cdba698.slice/crio-23b01e464ae5a984a2ca17cc3178cb8c42b8996461a435b11a054460dcd673e7 WatchSource:0}: Error finding container 23b01e464ae5a984a2ca17cc3178cb8c42b8996461a435b11a054460dcd673e7: Status 404 returned error can't find the container with id 23b01e464ae5a984a2ca17cc3178cb8c42b8996461a435b11a054460dcd673e7 Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.371090 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.465373 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cz896" event={"ID":"ed1e2f4c-b077-4a04-8edb-4d169e42964e","Type":"ContainerDied","Data":"f66e95cf6a5e26470c6af8c7ab9b63a28367825db8a9c72ea31bdac9ab4969c5"} Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.465971 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66e95cf6a5e26470c6af8c7ab9b63a28367825db8a9c72ea31bdac9ab4969c5" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.466054 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cz896" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.468300 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b8645de-4272-4382-bb78-4ec88cdba698","Type":"ContainerStarted","Data":"23b01e464ae5a984a2ca17cc3178cb8c42b8996461a435b11a054460dcd673e7"} Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.516255 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-config-data\") pod \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.516303 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-combined-ca-bundle\") pod \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.516396 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7nc7\" (UniqueName: 
\"kubernetes.io/projected/ed1e2f4c-b077-4a04-8edb-4d169e42964e-kube-api-access-g7nc7\") pod \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.516416 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-fernet-keys\") pod \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.516450 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-scripts\") pod \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.516487 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-credential-keys\") pod \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\" (UID: \"ed1e2f4c-b077-4a04-8edb-4d169e42964e\") " Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.526643 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-scripts" (OuterVolumeSpecName: "scripts") pod "ed1e2f4c-b077-4a04-8edb-4d169e42964e" (UID: "ed1e2f4c-b077-4a04-8edb-4d169e42964e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.536418 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1e2f4c-b077-4a04-8edb-4d169e42964e-kube-api-access-g7nc7" (OuterVolumeSpecName: "kube-api-access-g7nc7") pod "ed1e2f4c-b077-4a04-8edb-4d169e42964e" (UID: "ed1e2f4c-b077-4a04-8edb-4d169e42964e"). 
InnerVolumeSpecName "kube-api-access-g7nc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.537399 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ed1e2f4c-b077-4a04-8edb-4d169e42964e" (UID: "ed1e2f4c-b077-4a04-8edb-4d169e42964e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.537799 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ed1e2f4c-b077-4a04-8edb-4d169e42964e" (UID: "ed1e2f4c-b077-4a04-8edb-4d169e42964e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.560391 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-config-data" (OuterVolumeSpecName: "config-data") pod "ed1e2f4c-b077-4a04-8edb-4d169e42964e" (UID: "ed1e2f4c-b077-4a04-8edb-4d169e42964e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.573360 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed1e2f4c-b077-4a04-8edb-4d169e42964e" (UID: "ed1e2f4c-b077-4a04-8edb-4d169e42964e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.627266 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7nc7\" (UniqueName: \"kubernetes.io/projected/ed1e2f4c-b077-4a04-8edb-4d169e42964e-kube-api-access-g7nc7\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.627319 4764 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.627344 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.627359 4764 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.627373 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:39 crc kubenswrapper[4764]: I0127 07:34:39.627384 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1e2f4c-b077-4a04-8edb-4d169e42964e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.009913 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.020897 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fbd5ff6f8-6tmz6"] Jan 27 07:34:40 crc 
kubenswrapper[4764]: W0127 07:34:40.024851 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be230ef_8bfc_453c_9653_dcae5c70bee7.slice/crio-6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119 WatchSource:0}: Error finding container 6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119: Status 404 returned error can't find the container with id 6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119 Jan 27 07:34:40 crc kubenswrapper[4764]: W0127 07:34:40.029679 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c441fce_1c1f_4e6c_8fbd_12ef92f35f25.slice/crio-69d09b50c221471e14c80d77fc06f0d6948f0b481b83891992d914c45215b980 WatchSource:0}: Error finding container 69d09b50c221471e14c80d77fc06f0d6948f0b481b83891992d914c45215b980: Status 404 returned error can't find the container with id 69d09b50c221471e14c80d77fc06f0d6948f0b481b83891992d914c45215b980 Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.479504 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerStarted","Data":"9e1ebbf1bfe65cb5a2829a73343035c9b240332bd9abcc8543b1bb9cf4c16520"} Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.484358 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fbd5ff6f8-6tmz6" event={"ID":"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25","Type":"ContainerStarted","Data":"0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083"} Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.484401 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fbd5ff6f8-6tmz6" event={"ID":"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25","Type":"ContainerStarted","Data":"69d09b50c221471e14c80d77fc06f0d6948f0b481b83891992d914c45215b980"} Jan 27 
07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.498824 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b8645de-4272-4382-bb78-4ec88cdba698","Type":"ContainerStarted","Data":"5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223"} Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.500779 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be230ef-8bfc-453c-9653-dcae5c70bee7","Type":"ContainerStarted","Data":"6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119"} Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.527949 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c7589cf-vmvrw"] Jan 27 07:34:40 crc kubenswrapper[4764]: E0127 07:34:40.528306 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1e2f4c-b077-4a04-8edb-4d169e42964e" containerName="keystone-bootstrap" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.528323 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1e2f4c-b077-4a04-8edb-4d169e42964e" containerName="keystone-bootstrap" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.528552 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1e2f4c-b077-4a04-8edb-4d169e42964e" containerName="keystone-bootstrap" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.529161 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.543736 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.544065 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.544194 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.544312 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.544452 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88tdq" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.544734 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.546951 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c7589cf-vmvrw"] Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.569675 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-scripts\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.569755 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-fernet-keys\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 
07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.570006 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-public-tls-certs\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.570038 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-combined-ca-bundle\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.570118 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-config-data\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.570139 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-internal-tls-certs\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.570317 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-credential-keys\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 
crc kubenswrapper[4764]: I0127 07:34:40.570355 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5hqz\" (UniqueName: \"kubernetes.io/projected/b7ede279-fbc7-439d-9b05-95bb8705cbbb-kube-api-access-p5hqz\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.671787 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-config-data\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.673460 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-internal-tls-certs\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.673580 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-credential-keys\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.673662 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5hqz\" (UniqueName: \"kubernetes.io/projected/b7ede279-fbc7-439d-9b05-95bb8705cbbb-kube-api-access-p5hqz\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.673736 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-scripts\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.673795 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-fernet-keys\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.673819 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-public-tls-certs\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.673869 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-combined-ca-bundle\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.679940 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-scripts\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.681017 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-credential-keys\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.681991 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-public-tls-certs\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.686483 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-config-data\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.691638 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-internal-tls-certs\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.692404 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-fernet-keys\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.696069 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5hqz\" (UniqueName: \"kubernetes.io/projected/b7ede279-fbc7-439d-9b05-95bb8705cbbb-kube-api-access-p5hqz\") pod \"keystone-7c7589cf-vmvrw\" (UID: 
\"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.696670 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ede279-fbc7-439d-9b05-95bb8705cbbb-combined-ca-bundle\") pod \"keystone-7c7589cf-vmvrw\" (UID: \"b7ede279-fbc7-439d-9b05-95bb8705cbbb\") " pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:40 crc kubenswrapper[4764]: I0127 07:34:40.884747 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.475038 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c7589cf-vmvrw"] Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.512949 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fbd5ff6f8-6tmz6" event={"ID":"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25","Type":"ContainerStarted","Data":"b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e"} Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.514228 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.515719 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c7589cf-vmvrw" event={"ID":"b7ede279-fbc7-439d-9b05-95bb8705cbbb","Type":"ContainerStarted","Data":"344215ca70bf0d43b8ee7479500ed8bd5a85808d249c71844c63b1580c0adf40"} Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.516227 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.519747 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1b8645de-4272-4382-bb78-4ec88cdba698","Type":"ContainerStarted","Data":"d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c"} Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.522044 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be230ef-8bfc-453c-9653-dcae5c70bee7","Type":"ContainerStarted","Data":"cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68"} Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.550119 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7fbd5ff6f8-6tmz6" podStartSLOduration=6.55009149 podStartE2EDuration="6.55009149s" podCreationTimestamp="2026-01-27 07:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:41.539595615 +0000 UTC m=+1094.135218141" watchObservedRunningTime="2026-01-27 07:34:41.55009149 +0000 UTC m=+1094.145714016" Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.582591 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.582568711 podStartE2EDuration="7.582568711s" podCreationTimestamp="2026-01-27 07:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:41.574166491 +0000 UTC m=+1094.169789017" watchObservedRunningTime="2026-01-27 07:34:41.582568711 +0000 UTC m=+1094.178191247" Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.699058 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 07:34:41.772003 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-9b292"] Jan 27 07:34:41 crc kubenswrapper[4764]: I0127 
07:34:41.772634 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" podUID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerName="dnsmasq-dns" containerID="cri-o://64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef" gracePeriod=10 Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.380661 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.427387 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-nb\") pod \"97bf0eb8-706c-4461-92a1-6629d0c48905\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.427539 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-config\") pod \"97bf0eb8-706c-4461-92a1-6629d0c48905\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.427580 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-swift-storage-0\") pod \"97bf0eb8-706c-4461-92a1-6629d0c48905\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.427663 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-svc\") pod \"97bf0eb8-706c-4461-92a1-6629d0c48905\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.427727 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-sb\") pod \"97bf0eb8-706c-4461-92a1-6629d0c48905\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.427771 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqhsr\" (UniqueName: \"kubernetes.io/projected/97bf0eb8-706c-4461-92a1-6629d0c48905-kube-api-access-pqhsr\") pod \"97bf0eb8-706c-4461-92a1-6629d0c48905\" (UID: \"97bf0eb8-706c-4461-92a1-6629d0c48905\") " Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.468645 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bf0eb8-706c-4461-92a1-6629d0c48905-kube-api-access-pqhsr" (OuterVolumeSpecName: "kube-api-access-pqhsr") pod "97bf0eb8-706c-4461-92a1-6629d0c48905" (UID: "97bf0eb8-706c-4461-92a1-6629d0c48905"). InnerVolumeSpecName "kube-api-access-pqhsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.518172 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97bf0eb8-706c-4461-92a1-6629d0c48905" (UID: "97bf0eb8-706c-4461-92a1-6629d0c48905"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.526044 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "97bf0eb8-706c-4461-92a1-6629d0c48905" (UID: "97bf0eb8-706c-4461-92a1-6629d0c48905"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.530697 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.530734 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.530749 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqhsr\" (UniqueName: \"kubernetes.io/projected/97bf0eb8-706c-4461-92a1-6629d0c48905-kube-api-access-pqhsr\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.550218 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "97bf0eb8-706c-4461-92a1-6629d0c48905" (UID: "97bf0eb8-706c-4461-92a1-6629d0c48905"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.558766 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "97bf0eb8-706c-4461-92a1-6629d0c48905" (UID: "97bf0eb8-706c-4461-92a1-6629d0c48905"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.558781 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be230ef-8bfc-453c-9653-dcae5c70bee7","Type":"ContainerStarted","Data":"28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756"} Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.563381 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-config" (OuterVolumeSpecName: "config") pod "97bf0eb8-706c-4461-92a1-6629d0c48905" (UID: "97bf0eb8-706c-4461-92a1-6629d0c48905"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.570198 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c7589cf-vmvrw" event={"ID":"b7ede279-fbc7-439d-9b05-95bb8705cbbb","Type":"ContainerStarted","Data":"a02262df2236639bc46cd82630e8b9b9d13a16fdaba1ba17826d4c0f260e5f70"} Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.570593 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.579716 4764 generic.go:334] "Generic (PLEG): container finished" podID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerID="64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef" exitCode=0 Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.579875 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.580200 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" event={"ID":"97bf0eb8-706c-4461-92a1-6629d0c48905","Type":"ContainerDied","Data":"64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef"} Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.580296 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-9b292" event={"ID":"97bf0eb8-706c-4461-92a1-6629d0c48905","Type":"ContainerDied","Data":"9de2b491cf60b10f195dbe127fb1c67a4c8262b3484c0dc67b4ad3761378b842"} Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.580354 4764 scope.go:117] "RemoveContainer" containerID="64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.600313 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.600288444 podStartE2EDuration="7.600288444s" podCreationTimestamp="2026-01-27 07:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:42.585430705 +0000 UTC m=+1095.181053251" watchObservedRunningTime="2026-01-27 07:34:42.600288444 +0000 UTC m=+1095.195910980" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.610460 4764 scope.go:117] "RemoveContainer" containerID="28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.616354 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c7589cf-vmvrw" podStartSLOduration=2.616335155 podStartE2EDuration="2.616335155s" podCreationTimestamp="2026-01-27 07:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:42.614075846 +0000 UTC m=+1095.209698372" watchObservedRunningTime="2026-01-27 07:34:42.616335155 +0000 UTC m=+1095.211957681" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.631984 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.632011 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.632022 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97bf0eb8-706c-4461-92a1-6629d0c48905-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.641682 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-9b292"] Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.650306 4764 scope.go:117] "RemoveContainer" containerID="64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef" Jan 27 07:34:42 crc kubenswrapper[4764]: E0127 07:34:42.650998 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef\": container with ID starting with 64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef not found: ID does not exist" containerID="64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.651047 4764 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef"} err="failed to get container status \"64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef\": rpc error: code = NotFound desc = could not find container \"64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef\": container with ID starting with 64d5f7a0acf5788a0c71ee18a7fd90e01287087c4ed369b824f677029a6938ef not found: ID does not exist" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.651083 4764 scope.go:117] "RemoveContainer" containerID="28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.652076 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-9b292"] Jan 27 07:34:42 crc kubenswrapper[4764]: E0127 07:34:42.658666 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be\": container with ID starting with 28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be not found: ID does not exist" containerID="28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be" Jan 27 07:34:42 crc kubenswrapper[4764]: I0127 07:34:42.658701 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be"} err="failed to get container status \"28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be\": rpc error: code = NotFound desc = could not find container \"28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be\": container with ID starting with 28723c5e5f52a8f978559bb063ac5ba9104a860c8491227fd795c81fc41b58be not found: ID does not exist" Jan 27 07:34:43 crc kubenswrapper[4764]: I0127 07:34:43.599047 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-4tp44" event={"ID":"7cfdc388-3353-43e2-99b0-7c6e17fb78f9","Type":"ContainerStarted","Data":"6839d75e46748a65e0f30ead1d46207337d62e682f8ba4f686b0791f6c3d7132"} Jan 27 07:34:43 crc kubenswrapper[4764]: I0127 07:34:43.627082 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4tp44" podStartSLOduration=2.524626584 podStartE2EDuration="43.627059014s" podCreationTimestamp="2026-01-27 07:34:00 +0000 UTC" firstStartedPulling="2026-01-27 07:34:01.887210872 +0000 UTC m=+1054.482833398" lastFinishedPulling="2026-01-27 07:34:42.989643302 +0000 UTC m=+1095.585265828" observedRunningTime="2026-01-27 07:34:43.620467762 +0000 UTC m=+1096.216090298" watchObservedRunningTime="2026-01-27 07:34:43.627059014 +0000 UTC m=+1096.222681550" Jan 27 07:34:44 crc kubenswrapper[4764]: I0127 07:34:44.453566 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bf0eb8-706c-4461-92a1-6629d0c48905" path="/var/lib/kubelet/pods/97bf0eb8-706c-4461-92a1-6629d0c48905/volumes" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.154266 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.154557 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.196594 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.206863 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.623522 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7875w" 
event={"ID":"64c3a86b-6e48-4aa4-950e-d8ecf643cf48","Type":"ContainerStarted","Data":"3b14b95d8e8a03d5cdbc55e03ce8bd960f1670c16840466cddcb5a316eb64d92"} Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.623735 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.623845 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.916828 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.916880 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.953808 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.968841 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:45 crc kubenswrapper[4764]: I0127 07:34:45.983858 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7875w" podStartSLOduration=4.146402174 podStartE2EDuration="45.983840904s" podCreationTimestamp="2026-01-27 07:34:00 +0000 UTC" firstStartedPulling="2026-01-27 07:34:02.103801671 +0000 UTC m=+1054.699424197" lastFinishedPulling="2026-01-27 07:34:43.941240381 +0000 UTC m=+1096.536862927" observedRunningTime="2026-01-27 07:34:45.648271236 +0000 UTC m=+1098.243893762" watchObservedRunningTime="2026-01-27 07:34:45.983840904 +0000 UTC m=+1098.579463430" Jan 27 07:34:46 crc kubenswrapper[4764]: I0127 07:34:46.633475 4764 generic.go:334] "Generic (PLEG): container finished" 
podID="7cfdc388-3353-43e2-99b0-7c6e17fb78f9" containerID="6839d75e46748a65e0f30ead1d46207337d62e682f8ba4f686b0791f6c3d7132" exitCode=0 Jan 27 07:34:46 crc kubenswrapper[4764]: I0127 07:34:46.633557 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4tp44" event={"ID":"7cfdc388-3353-43e2-99b0-7c6e17fb78f9","Type":"ContainerDied","Data":"6839d75e46748a65e0f30ead1d46207337d62e682f8ba4f686b0791f6c3d7132"} Jan 27 07:34:46 crc kubenswrapper[4764]: I0127 07:34:46.633831 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:46 crc kubenswrapper[4764]: I0127 07:34:46.634001 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:47 crc kubenswrapper[4764]: I0127 07:34:47.730318 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 07:34:47 crc kubenswrapper[4764]: I0127 07:34:47.730417 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:34:47 crc kubenswrapper[4764]: I0127 07:34:47.733825 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 07:34:48 crc kubenswrapper[4764]: I0127 07:34:48.601939 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 27 07:34:48 crc kubenswrapper[4764]: I0127 07:34:48.695821 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5ff9bfcff8-v9nrc" podUID="4acbb02d-c98c-4b45-bc09-dd13fe383502" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 27 07:34:48 crc kubenswrapper[4764]: I0127 07:34:48.877603 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:48 crc kubenswrapper[4764]: I0127 07:34:48.877704 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.391768 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.468023 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6hx\" (UniqueName: \"kubernetes.io/projected/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-kube-api-access-nq6hx\") pod \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.468143 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-combined-ca-bundle\") pod \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.468198 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-db-sync-config-data\") pod \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\" (UID: \"7cfdc388-3353-43e2-99b0-7c6e17fb78f9\") " Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.472426 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.473194 4764 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-kube-api-access-nq6hx" (OuterVolumeSpecName: "kube-api-access-nq6hx") pod "7cfdc388-3353-43e2-99b0-7c6e17fb78f9" (UID: "7cfdc388-3353-43e2-99b0-7c6e17fb78f9"). InnerVolumeSpecName "kube-api-access-nq6hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.480155 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7cfdc388-3353-43e2-99b0-7c6e17fb78f9" (UID: "7cfdc388-3353-43e2-99b0-7c6e17fb78f9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.529738 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cfdc388-3353-43e2-99b0-7c6e17fb78f9" (UID: "7cfdc388-3353-43e2-99b0-7c6e17fb78f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.573180 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq6hx\" (UniqueName: \"kubernetes.io/projected/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-kube-api-access-nq6hx\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.573220 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.573232 4764 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7cfdc388-3353-43e2-99b0-7c6e17fb78f9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.657926 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4tp44" event={"ID":"7cfdc388-3353-43e2-99b0-7c6e17fb78f9","Type":"ContainerDied","Data":"ec9fc1a17a76b8cac95708f63cb8a5b9a38ad63d2ec4b176713ecf1c4c585d7c"} Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.657974 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec9fc1a17a76b8cac95708f63cb8a5b9a38ad63d2ec4b176713ecf1c4c585d7c" Jan 27 07:34:49 crc kubenswrapper[4764]: I0127 07:34:49.657945 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4tp44" Jan 27 07:34:49 crc kubenswrapper[4764]: E0127 07:34:49.680180 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="303940fa-42c3-4597-a545-66c946caf680" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.669711 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerStarted","Data":"dc63a7586900af41b9e6f5c943889ef8c847e9692b7949d325b49eb4363852b2"} Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.669883 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="ceilometer-notification-agent" containerID="cri-o://8fceea8466a9b71b43b381a540b8debd35f1a337ccced3e7b157abacd7e45483" gracePeriod=30 Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.669985 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.670302 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="proxy-httpd" containerID="cri-o://dc63a7586900af41b9e6f5c943889ef8c847e9692b7949d325b49eb4363852b2" gracePeriod=30 Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.670353 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="sg-core" containerID="cri-o://9e1ebbf1bfe65cb5a2829a73343035c9b240332bd9abcc8543b1bb9cf4c16520" gracePeriod=30 Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.781478 4764 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-57b4cd6565-zcddr"] Jan 27 07:34:50 crc kubenswrapper[4764]: E0127 07:34:50.781773 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerName="init" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.781784 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerName="init" Jan 27 07:34:50 crc kubenswrapper[4764]: E0127 07:34:50.781800 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerName="dnsmasq-dns" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.781806 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerName="dnsmasq-dns" Jan 27 07:34:50 crc kubenswrapper[4764]: E0127 07:34:50.781831 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfdc388-3353-43e2-99b0-7c6e17fb78f9" containerName="barbican-db-sync" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.781838 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfdc388-3353-43e2-99b0-7c6e17fb78f9" containerName="barbican-db-sync" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.782003 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfdc388-3353-43e2-99b0-7c6e17fb78f9" containerName="barbican-db-sync" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.782017 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bf0eb8-706c-4461-92a1-6629d0c48905" containerName="dnsmasq-dns" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.782867 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.786195 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.786401 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.786688 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gg7tp" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.821270 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-dffd4f5f4-gww9r"] Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.830837 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57b4cd6565-zcddr"] Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.830937 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.836397 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.849370 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-dffd4f5f4-gww9r"] Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.910751 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.910831 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64n55\" (UniqueName: \"kubernetes.io/projected/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-kube-api-access-64n55\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.910880 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-logs\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.910933 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6gb\" (UniqueName: 
\"kubernetes.io/projected/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-kube-api-access-wt6gb\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.910976 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data-custom\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.911005 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data-custom\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.911051 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.911103 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-logs\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.911172 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-combined-ca-bundle\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.911196 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-combined-ca-bundle\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.918490 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hr99f"] Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.919918 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.942004 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hr99f"] Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.984789 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57b65668f8-l79bk"] Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.986682 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:50 crc kubenswrapper[4764]: I0127 07:34:50.989145 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.006354 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57b65668f8-l79bk"] Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013275 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data-custom\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013339 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013372 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013405 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-logs\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc 
kubenswrapper[4764]: I0127 07:34:51.013454 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013478 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013499 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f40b18c-5d92-40a2-80fe-a7baad40da13-logs\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013519 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-combined-ca-bundle\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013534 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-combined-ca-bundle\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 
27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013566 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013582 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data-custom\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013600 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bg7j\" (UniqueName: \"kubernetes.io/projected/89f76719-e428-4f69-9885-0d763981b164-kube-api-access-6bg7j\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013623 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-combined-ca-bundle\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013641 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64n55\" (UniqueName: \"kubernetes.io/projected/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-kube-api-access-64n55\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: 
\"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013665 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013686 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-logs\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013714 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013734 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k985k\" (UniqueName: \"kubernetes.io/projected/0f40b18c-5d92-40a2-80fe-a7baad40da13-kube-api-access-k985k\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013758 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-config\") 
pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013779 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6gb\" (UniqueName: \"kubernetes.io/projected/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-kube-api-access-wt6gb\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.013796 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data-custom\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.018621 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data-custom\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.019160 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-logs\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.019475 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-logs\") pod 
\"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.022947 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.028997 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-combined-ca-bundle\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.034127 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-combined-ca-bundle\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.035024 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data-custom\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.044589 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.047638 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64n55\" (UniqueName: \"kubernetes.io/projected/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-kube-api-access-64n55\") pod \"barbican-worker-57b4cd6565-zcddr\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.051151 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6gb\" (UniqueName: \"kubernetes.io/projected/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-kube-api-access-wt6gb\") pod \"barbican-keystone-listener-dffd4f5f4-gww9r\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116354 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data-custom\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116399 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bg7j\" (UniqueName: \"kubernetes.io/projected/89f76719-e428-4f69-9885-0d763981b164-kube-api-access-6bg7j\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116430 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-combined-ca-bundle\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116478 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116510 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116530 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k985k\" (UniqueName: \"kubernetes.io/projected/0f40b18c-5d92-40a2-80fe-a7baad40da13-kube-api-access-k985k\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116552 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-config\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116601 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116642 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116665 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.116680 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f40b18c-5d92-40a2-80fe-a7baad40da13-logs\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.117109 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f40b18c-5d92-40a2-80fe-a7baad40da13-logs\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.117364 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.117759 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.118282 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.118317 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.118853 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-config\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.125707 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-combined-ca-bundle\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " 
pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.125706 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data-custom\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.126329 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.136541 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bg7j\" (UniqueName: \"kubernetes.io/projected/89f76719-e428-4f69-9885-0d763981b164-kube-api-access-6bg7j\") pod \"dnsmasq-dns-7bdf86f46f-hr99f\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.138362 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k985k\" (UniqueName: \"kubernetes.io/projected/0f40b18c-5d92-40a2-80fe-a7baad40da13-kube-api-access-k985k\") pod \"barbican-api-57b65668f8-l79bk\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.148268 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.162792 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.244182 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.315134 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.686241 4764 generic.go:334] "Generic (PLEG): container finished" podID="64c3a86b-6e48-4aa4-950e-d8ecf643cf48" containerID="3b14b95d8e8a03d5cdbc55e03ce8bd960f1670c16840466cddcb5a316eb64d92" exitCode=0 Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.686289 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7875w" event={"ID":"64c3a86b-6e48-4aa4-950e-d8ecf643cf48","Type":"ContainerDied","Data":"3b14b95d8e8a03d5cdbc55e03ce8bd960f1670c16840466cddcb5a316eb64d92"} Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.689833 4764 generic.go:334] "Generic (PLEG): container finished" podID="303940fa-42c3-4597-a545-66c946caf680" containerID="dc63a7586900af41b9e6f5c943889ef8c847e9692b7949d325b49eb4363852b2" exitCode=0 Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.689867 4764 generic.go:334] "Generic (PLEG): container finished" podID="303940fa-42c3-4597-a545-66c946caf680" containerID="9e1ebbf1bfe65cb5a2829a73343035c9b240332bd9abcc8543b1bb9cf4c16520" exitCode=2 Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.689888 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerDied","Data":"dc63a7586900af41b9e6f5c943889ef8c847e9692b7949d325b49eb4363852b2"} Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.689908 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerDied","Data":"9e1ebbf1bfe65cb5a2829a73343035c9b240332bd9abcc8543b1bb9cf4c16520"} Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.704696 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-dffd4f5f4-gww9r"] Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.791078 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57b4cd6565-zcddr"] Jan 27 07:34:51 crc kubenswrapper[4764]: W0127 07:34:51.804786 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3c6d80_40f9_4109_a486_af6c7f42cbf6.slice/crio-357e1717b823573116a5f518b4a8322c5be22c5b8f4ae530f18bb76e1784f86f WatchSource:0}: Error finding container 357e1717b823573116a5f518b4a8322c5be22c5b8f4ae530f18bb76e1784f86f: Status 404 returned error can't find the container with id 357e1717b823573116a5f518b4a8322c5be22c5b8f4ae530f18bb76e1784f86f Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.926413 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57b65668f8-l79bk"] Jan 27 07:34:51 crc kubenswrapper[4764]: I0127 07:34:51.945467 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hr99f"] Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.705720 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57b65668f8-l79bk" event={"ID":"0f40b18c-5d92-40a2-80fe-a7baad40da13","Type":"ContainerStarted","Data":"e43a28bf9f2893daea65f365081c9ba1c9c3cf8b6cb7d3ed556d2cc2b5ea32ae"} Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.706087 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57b65668f8-l79bk" event={"ID":"0f40b18c-5d92-40a2-80fe-a7baad40da13","Type":"ContainerStarted","Data":"5c130bcdffd3247680a388e07f188696544349d9bec5053bd03f18f6d64dd3a9"} 
Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.706098 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57b65668f8-l79bk" event={"ID":"0f40b18c-5d92-40a2-80fe-a7baad40da13","Type":"ContainerStarted","Data":"6d744f2a87161ee1961da29109e2eacde218ad3fa256a159ca53cd46b17f2eb6"} Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.706111 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.710377 4764 generic.go:334] "Generic (PLEG): container finished" podID="89f76719-e428-4f69-9885-0d763981b164" containerID="3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc" exitCode=0 Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.710573 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" event={"ID":"89f76719-e428-4f69-9885-0d763981b164","Type":"ContainerDied","Data":"3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc"} Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.710621 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" event={"ID":"89f76719-e428-4f69-9885-0d763981b164","Type":"ContainerStarted","Data":"88d9b8dcfd266a7ad3db8ccb592a62fb061d5ccedcd0c38d6bd199c82063581d"} Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.717387 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b4cd6565-zcddr" event={"ID":"7d3c6d80-40f9-4109-a486-af6c7f42cbf6","Type":"ContainerStarted","Data":"357e1717b823573116a5f518b4a8322c5be22c5b8f4ae530f18bb76e1784f86f"} Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.726263 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57b65668f8-l79bk" podStartSLOduration=2.726245096 podStartE2EDuration="2.726245096s" podCreationTimestamp="2026-01-27 07:34:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:52.721938053 +0000 UTC m=+1105.317560579" watchObservedRunningTime="2026-01-27 07:34:52.726245096 +0000 UTC m=+1105.321867622" Jan 27 07:34:52 crc kubenswrapper[4764]: I0127 07:34:52.727219 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" event={"ID":"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6","Type":"ContainerStarted","Data":"6e3b9a44c86e75e2ba40203e00c14ba2c5a9fb10fb0ecd07949b70e708873544"} Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.645970 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.702832 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-combined-ca-bundle\") pod \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.702890 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-db-sync-config-data\") pod \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.702984 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgxf4\" (UniqueName: \"kubernetes.io/projected/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-kube-api-access-zgxf4\") pod \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.703037 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-config-data\") pod \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.703086 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-scripts\") pod \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.703123 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-etc-machine-id\") pod \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\" (UID: \"64c3a86b-6e48-4aa4-950e-d8ecf643cf48\") " Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.703620 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "64c3a86b-6e48-4aa4-950e-d8ecf643cf48" (UID: "64c3a86b-6e48-4aa4-950e-d8ecf643cf48"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.716559 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-kube-api-access-zgxf4" (OuterVolumeSpecName: "kube-api-access-zgxf4") pod "64c3a86b-6e48-4aa4-950e-d8ecf643cf48" (UID: "64c3a86b-6e48-4aa4-950e-d8ecf643cf48"). InnerVolumeSpecName "kube-api-access-zgxf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.727752 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64c3a86b-6e48-4aa4-950e-d8ecf643cf48" (UID: "64c3a86b-6e48-4aa4-950e-d8ecf643cf48"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.727978 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-scripts" (OuterVolumeSpecName: "scripts") pod "64c3a86b-6e48-4aa4-950e-d8ecf643cf48" (UID: "64c3a86b-6e48-4aa4-950e-d8ecf643cf48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.737640 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64c3a86b-6e48-4aa4-950e-d8ecf643cf48" (UID: "64c3a86b-6e48-4aa4-950e-d8ecf643cf48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.761078 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7875w" event={"ID":"64c3a86b-6e48-4aa4-950e-d8ecf643cf48","Type":"ContainerDied","Data":"ed12c56bd8975c67c470bd8a4c68a399075013e263822bf50f7cc6af593b72bf"} Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.761254 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed12c56bd8975c67c470bd8a4c68a399075013e263822bf50f7cc6af593b72bf" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.761227 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7875w" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.763610 4764 generic.go:334] "Generic (PLEG): container finished" podID="303940fa-42c3-4597-a545-66c946caf680" containerID="8fceea8466a9b71b43b381a540b8debd35f1a337ccced3e7b157abacd7e45483" exitCode=0 Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.763668 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerDied","Data":"8fceea8466a9b71b43b381a540b8debd35f1a337ccced3e7b157abacd7e45483"} Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.805136 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.805161 4764 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.805170 4764 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zgxf4\" (UniqueName: \"kubernetes.io/projected/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-kube-api-access-zgxf4\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.805178 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.805186 4764 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.806512 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" event={"ID":"89f76719-e428-4f69-9885-0d763981b164","Type":"ContainerStarted","Data":"1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df"} Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.806568 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.806599 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.826598 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-config-data" (OuterVolumeSpecName: "config-data") pod "64c3a86b-6e48-4aa4-950e-d8ecf643cf48" (UID: "64c3a86b-6e48-4aa4-950e-d8ecf643cf48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.829342 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-846df5dc9d-clqgc"] Jan 27 07:34:53 crc kubenswrapper[4764]: E0127 07:34:53.829972 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c3a86b-6e48-4aa4-950e-d8ecf643cf48" containerName="cinder-db-sync" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.829988 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c3a86b-6e48-4aa4-950e-d8ecf643cf48" containerName="cinder-db-sync" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.830151 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c3a86b-6e48-4aa4-950e-d8ecf643cf48" containerName="cinder-db-sync" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.831119 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.833476 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.855281 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-846df5dc9d-clqgc"] Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.856085 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.864220 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" podStartSLOduration=3.864205061 podStartE2EDuration="3.864205061s" podCreationTimestamp="2026-01-27 07:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:53.862183478 +0000 UTC m=+1106.457806004" 
watchObservedRunningTime="2026-01-27 07:34:53.864205061 +0000 UTC m=+1106.459827587" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.907755 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data-custom\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.907834 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/d5476bb4-c464-49cd-acb2-1cae6acc8bea-kube-api-access-zzpwk\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.907889 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-internal-tls-certs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.907912 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5476bb4-c464-49cd-acb2-1cae6acc8bea-logs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.907956 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data\") pod 
\"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.908087 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-combined-ca-bundle\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.908121 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-public-tls-certs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.908217 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c3a86b-6e48-4aa4-950e-d8ecf643cf48-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.955215 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.960091 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.966347 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 07:34:53 crc kubenswrapper[4764]: I0127 07:34:53.983586 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009603 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-combined-ca-bundle\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009645 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009667 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-public-tls-certs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009698 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 
07:34:54.009734 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data-custom\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009751 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjdj\" (UniqueName: \"kubernetes.io/projected/a39ef80b-f486-467e-81bd-38eec01902b7-kube-api-access-zpjdj\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009797 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/d5476bb4-c464-49cd-acb2-1cae6acc8bea-kube-api-access-zzpwk\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009852 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a39ef80b-f486-467e-81bd-38eec01902b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009885 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-internal-tls-certs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 
07:34:54.009905 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009929 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5476bb4-c464-49cd-acb2-1cae6acc8bea-logs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009952 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.009998 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.013592 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5476bb4-c464-49cd-acb2-1cae6acc8bea-logs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.016784 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data-custom\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.018148 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-public-tls-certs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.018208 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-internal-tls-certs\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.020958 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.025418 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-combined-ca-bundle\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.035460 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpwk\" (UniqueName: 
\"kubernetes.io/projected/d5476bb4-c464-49cd-acb2-1cae6acc8bea-kube-api-access-zzpwk\") pod \"barbican-api-846df5dc9d-clqgc\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.062416 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hr99f"] Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.110110 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-nqjwf"] Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.112582 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.114334 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.114383 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.114420 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.114471 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjdj\" (UniqueName: 
\"kubernetes.io/projected/a39ef80b-f486-467e-81bd-38eec01902b7-kube-api-access-zpjdj\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.114513 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a39ef80b-f486-467e-81bd-38eec01902b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.115385 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.122989 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a39ef80b-f486-467e-81bd-38eec01902b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.125193 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-nqjwf"] Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.127162 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.127416 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.128689 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.128768 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.152226 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjdj\" (UniqueName: \"kubernetes.io/projected/a39ef80b-f486-467e-81bd-38eec01902b7-kube-api-access-zpjdj\") pod \"cinder-scheduler-0\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.205980 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.220601 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.220671 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-config\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.220717 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.220775 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.220802 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" 
(UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.220823 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtkf\" (UniqueName: \"kubernetes.io/projected/1201c328-fd0d-47ca-a25e-756f49187e19-kube-api-access-ldtkf\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.227377 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.228892 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.231917 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.240942 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.292550 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.322505 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52tft\" (UniqueName: \"kubernetes.io/projected/90afe5f8-7032-4882-b8c7-6489375421d4-kube-api-access-52tft\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.322606 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.322692 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90afe5f8-7032-4882-b8c7-6489375421d4-logs\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.322724 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.322765 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc 
kubenswrapper[4764]: I0127 07:34:54.322853 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90afe5f8-7032-4882-b8c7-6489375421d4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.322899 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-scripts\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.323008 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtkf\" (UniqueName: \"kubernetes.io/projected/1201c328-fd0d-47ca-a25e-756f49187e19-kube-api-access-ldtkf\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.323104 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.323155 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.323198 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.323294 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-config\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.323327 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data-custom\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.324031 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.324056 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.324163 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.326834 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-config\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.327179 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.343077 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtkf\" (UniqueName: \"kubernetes.io/projected/1201c328-fd0d-47ca-a25e-756f49187e19-kube-api-access-ldtkf\") pod \"dnsmasq-dns-75bfc9b94f-nqjwf\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425060 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425115 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425169 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data-custom\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425202 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52tft\" (UniqueName: \"kubernetes.io/projected/90afe5f8-7032-4882-b8c7-6489375421d4-kube-api-access-52tft\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425277 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90afe5f8-7032-4882-b8c7-6489375421d4-logs\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425316 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90afe5f8-7032-4882-b8c7-6489375421d4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425331 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-scripts\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425420 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/90afe5f8-7032-4882-b8c7-6489375421d4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.425841 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90afe5f8-7032-4882-b8c7-6489375421d4-logs\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.428924 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data-custom\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.428982 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.429531 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-scripts\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.429965 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.443318 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-52tft\" (UniqueName: \"kubernetes.io/projected/90afe5f8-7032-4882-b8c7-6489375421d4-kube-api-access-52tft\") pod \"cinder-api-0\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " pod="openstack/cinder-api-0" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.460640 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:54 crc kubenswrapper[4764]: I0127 07:34:54.551851 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.056113 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.139464 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-combined-ca-bundle\") pod \"303940fa-42c3-4597-a545-66c946caf680\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.139542 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-run-httpd\") pod \"303940fa-42c3-4597-a545-66c946caf680\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.139600 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-scripts\") pod \"303940fa-42c3-4597-a545-66c946caf680\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.139638 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-log-httpd\") pod \"303940fa-42c3-4597-a545-66c946caf680\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.139726 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-config-data\") pod \"303940fa-42c3-4597-a545-66c946caf680\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.139802 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czpgl\" (UniqueName: \"kubernetes.io/projected/303940fa-42c3-4597-a545-66c946caf680-kube-api-access-czpgl\") pod \"303940fa-42c3-4597-a545-66c946caf680\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.139873 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-sg-core-conf-yaml\") pod \"303940fa-42c3-4597-a545-66c946caf680\" (UID: \"303940fa-42c3-4597-a545-66c946caf680\") " Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.140578 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "303940fa-42c3-4597-a545-66c946caf680" (UID: "303940fa-42c3-4597-a545-66c946caf680"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.141256 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "303940fa-42c3-4597-a545-66c946caf680" (UID: "303940fa-42c3-4597-a545-66c946caf680"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.145976 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303940fa-42c3-4597-a545-66c946caf680-kube-api-access-czpgl" (OuterVolumeSpecName: "kube-api-access-czpgl") pod "303940fa-42c3-4597-a545-66c946caf680" (UID: "303940fa-42c3-4597-a545-66c946caf680"). InnerVolumeSpecName "kube-api-access-czpgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.146024 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-scripts" (OuterVolumeSpecName: "scripts") pod "303940fa-42c3-4597-a545-66c946caf680" (UID: "303940fa-42c3-4597-a545-66c946caf680"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.177349 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "303940fa-42c3-4597-a545-66c946caf680" (UID: "303940fa-42c3-4597-a545-66c946caf680"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.198018 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "303940fa-42c3-4597-a545-66c946caf680" (UID: "303940fa-42c3-4597-a545-66c946caf680"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.242194 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czpgl\" (UniqueName: \"kubernetes.io/projected/303940fa-42c3-4597-a545-66c946caf680-kube-api-access-czpgl\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.242229 4764 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.242241 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.242252 4764 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.242263 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.242275 4764 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/303940fa-42c3-4597-a545-66c946caf680-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.270897 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-config-data" (OuterVolumeSpecName: "config-data") pod "303940fa-42c3-4597-a545-66c946caf680" (UID: "303940fa-42c3-4597-a545-66c946caf680"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.276727 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.351530 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/303940fa-42c3-4597-a545-66c946caf680-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.390912 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-846df5dc9d-clqgc"] Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.404401 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.419393 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-nqjwf"] Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.840817 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-846df5dc9d-clqgc" event={"ID":"d5476bb4-c464-49cd-acb2-1cae6acc8bea","Type":"ContainerStarted","Data":"e274b26aac78830b0e6df787b4a0361d57f28eb63aa29aeea226aef360e3be5e"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.847945 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"90afe5f8-7032-4882-b8c7-6489375421d4","Type":"ContainerStarted","Data":"9ca93dd44e8c6459b45c1f99596f7b59cb9c6b80732d5750e8d73e63ee666d63"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.866686 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" event={"ID":"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6","Type":"ContainerStarted","Data":"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.866750 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" event={"ID":"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6","Type":"ContainerStarted","Data":"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.891493 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b4cd6565-zcddr" event={"ID":"7d3c6d80-40f9-4109-a486-af6c7f42cbf6","Type":"ContainerStarted","Data":"b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.891835 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b4cd6565-zcddr" event={"ID":"7d3c6d80-40f9-4109-a486-af6c7f42cbf6","Type":"ContainerStarted","Data":"912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.895776 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" podStartSLOduration=2.93200078 podStartE2EDuration="5.895758864s" podCreationTimestamp="2026-01-27 07:34:50 +0000 UTC" firstStartedPulling="2026-01-27 07:34:51.738943911 +0000 UTC m=+1104.334566437" lastFinishedPulling="2026-01-27 07:34:54.702701995 +0000 UTC m=+1107.298324521" observedRunningTime="2026-01-27 07:34:55.8910329 +0000 UTC m=+1108.486655436" 
watchObservedRunningTime="2026-01-27 07:34:55.895758864 +0000 UTC m=+1108.491381390" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.908083 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a39ef80b-f486-467e-81bd-38eec01902b7","Type":"ContainerStarted","Data":"e2d9b61987cf55c1f03145ebe3849312bba4c6a23c1421dd088c7ec54f10db24"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.924658 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-57b4cd6565-zcddr" podStartSLOduration=3.035153225 podStartE2EDuration="5.924640021s" podCreationTimestamp="2026-01-27 07:34:50 +0000 UTC" firstStartedPulling="2026-01-27 07:34:51.813189998 +0000 UTC m=+1104.408812524" lastFinishedPulling="2026-01-27 07:34:54.702676794 +0000 UTC m=+1107.298299320" observedRunningTime="2026-01-27 07:34:55.91279814 +0000 UTC m=+1108.508420666" watchObservedRunningTime="2026-01-27 07:34:55.924640021 +0000 UTC m=+1108.520262537" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.928344 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" event={"ID":"1201c328-fd0d-47ca-a25e-756f49187e19","Type":"ContainerStarted","Data":"7ddb3bcf1c9e3937c96e910e459d372de22cd8631230e580bcbe9c830682f021"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.959970 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" podUID="89f76719-e428-4f69-9885-0d763981b164" containerName="dnsmasq-dns" containerID="cri-o://1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df" gracePeriod=10 Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.960191 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.961235 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"303940fa-42c3-4597-a545-66c946caf680","Type":"ContainerDied","Data":"7ba0fabdf694d952b7406fa2d89189f4c901414055b6454c5b845efb5268e354"} Jan 27 07:34:55 crc kubenswrapper[4764]: I0127 07:34:55.961312 4764 scope.go:117] "RemoveContainer" containerID="dc63a7586900af41b9e6f5c943889ef8c847e9692b7949d325b49eb4363852b2" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.091449 4764 scope.go:117] "RemoveContainer" containerID="9e1ebbf1bfe65cb5a2829a73343035c9b240332bd9abcc8543b1bb9cf4c16520" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.136803 4764 scope.go:117] "RemoveContainer" containerID="8fceea8466a9b71b43b381a540b8debd35f1a337ccced3e7b157abacd7e45483" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.146234 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.156407 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.176618 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:56 crc kubenswrapper[4764]: E0127 07:34:56.177034 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="sg-core" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.177046 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="sg-core" Jan 27 07:34:56 crc kubenswrapper[4764]: E0127 07:34:56.177079 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="ceilometer-notification-agent" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.177087 4764 
state_mem.go:107] "Deleted CPUSet assignment" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="ceilometer-notification-agent" Jan 27 07:34:56 crc kubenswrapper[4764]: E0127 07:34:56.177096 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="proxy-httpd" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.177103 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="proxy-httpd" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.177266 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="sg-core" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.177286 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="proxy-httpd" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.177296 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="303940fa-42c3-4597-a545-66c946caf680" containerName="ceilometer-notification-agent" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.178848 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.181251 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.181777 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.197044 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.272427 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.272866 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-scripts\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.272961 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-log-httpd\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.272979 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-config-data\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " 
pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.273004 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp728\" (UniqueName: \"kubernetes.io/projected/4639425c-f733-47d7-a852-d79f16dc87e6-kube-api-access-fp728\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.273049 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-run-httpd\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.273073 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.375432 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.375560 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-scripts\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.375731 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-log-httpd\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.375754 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-config-data\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.375801 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp728\" (UniqueName: \"kubernetes.io/projected/4639425c-f733-47d7-a852-d79f16dc87e6-kube-api-access-fp728\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.375873 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-run-httpd\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.375905 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.379348 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-log-httpd\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc 
kubenswrapper[4764]: I0127 07:34:56.381930 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-run-httpd\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.382796 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.390138 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-scripts\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.390367 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-config-data\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.402054 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp728\" (UniqueName: \"kubernetes.io/projected/4639425c-f733-47d7-a852-d79f16dc87e6-kube-api-access-fp728\") pod \"ceilometer-0\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.403348 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"4639425c-f733-47d7-a852-d79f16dc87e6\") " pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.463024 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303940fa-42c3-4597-a545-66c946caf680" path="/var/lib/kubelet/pods/303940fa-42c3-4597-a545-66c946caf680/volumes" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.478640 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.502863 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.701898 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-swift-storage-0\") pod \"89f76719-e428-4f69-9885-0d763981b164\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.702135 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-config\") pod \"89f76719-e428-4f69-9885-0d763981b164\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.702176 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-sb\") pod \"89f76719-e428-4f69-9885-0d763981b164\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.702266 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-svc\") pod 
\"89f76719-e428-4f69-9885-0d763981b164\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.702314 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-nb\") pod \"89f76719-e428-4f69-9885-0d763981b164\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.702553 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bg7j\" (UniqueName: \"kubernetes.io/projected/89f76719-e428-4f69-9885-0d763981b164-kube-api-access-6bg7j\") pod \"89f76719-e428-4f69-9885-0d763981b164\" (UID: \"89f76719-e428-4f69-9885-0d763981b164\") " Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.717159 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f76719-e428-4f69-9885-0d763981b164-kube-api-access-6bg7j" (OuterVolumeSpecName: "kube-api-access-6bg7j") pod "89f76719-e428-4f69-9885-0d763981b164" (UID: "89f76719-e428-4f69-9885-0d763981b164"). InnerVolumeSpecName "kube-api-access-6bg7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.776248 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-config" (OuterVolumeSpecName: "config") pod "89f76719-e428-4f69-9885-0d763981b164" (UID: "89f76719-e428-4f69-9885-0d763981b164"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.794182 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89f76719-e428-4f69-9885-0d763981b164" (UID: "89f76719-e428-4f69-9885-0d763981b164"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.805799 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89f76719-e428-4f69-9885-0d763981b164" (UID: "89f76719-e428-4f69-9885-0d763981b164"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.812628 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.812688 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.812703 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.812717 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bg7j\" (UniqueName: \"kubernetes.io/projected/89f76719-e428-4f69-9885-0d763981b164-kube-api-access-6bg7j\") on node \"crc\" 
DevicePath \"\"" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.816909 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89f76719-e428-4f69-9885-0d763981b164" (UID: "89f76719-e428-4f69-9885-0d763981b164"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.834315 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89f76719-e428-4f69-9885-0d763981b164" (UID: "89f76719-e428-4f69-9885-0d763981b164"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.917623 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.917673 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f76719-e428-4f69-9885-0d763981b164-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.990026 4764 generic.go:334] "Generic (PLEG): container finished" podID="1201c328-fd0d-47ca-a25e-756f49187e19" containerID="097b9f1173d270d7e7fdc6be78a5d6438382a0932a03abdc51c1c16ab934723a" exitCode=0 Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.990109 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" event={"ID":"1201c328-fd0d-47ca-a25e-756f49187e19","Type":"ContainerDied","Data":"097b9f1173d270d7e7fdc6be78a5d6438382a0932a03abdc51c1c16ab934723a"} Jan 
27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.990134 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" event={"ID":"1201c328-fd0d-47ca-a25e-756f49187e19","Type":"ContainerStarted","Data":"4ae43e1bd6886cc6de4b6070681b1f52c0ccbbaa7779cca6a72840027ce9dbfa"} Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.990245 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.999328 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-846df5dc9d-clqgc" event={"ID":"d5476bb4-c464-49cd-acb2-1cae6acc8bea","Type":"ContainerStarted","Data":"ab2ec021a42ef7058513fbd77a0e52f702b82f4f545c319a39f029bb37028471"} Jan 27 07:34:56 crc kubenswrapper[4764]: I0127 07:34:56.999396 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-846df5dc9d-clqgc" event={"ID":"d5476bb4-c464-49cd-acb2-1cae6acc8bea","Type":"ContainerStarted","Data":"bf330d9035fae199f54da317068caaf0e877021e58a08135ffd390f53a3a9bcd"} Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:56.999896 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.000027 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.006045 4764 generic.go:334] "Generic (PLEG): container finished" podID="89f76719-e428-4f69-9885-0d763981b164" containerID="1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df" exitCode=0 Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.006175 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" 
event={"ID":"89f76719-e428-4f69-9885-0d763981b164","Type":"ContainerDied","Data":"1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df"} Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.006246 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" event={"ID":"89f76719-e428-4f69-9885-0d763981b164","Type":"ContainerDied","Data":"88d9b8dcfd266a7ad3db8ccb592a62fb061d5ccedcd0c38d6bd199c82063581d"} Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.006275 4764 scope.go:117] "RemoveContainer" containerID="1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.006549 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hr99f" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.013848 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" podStartSLOduration=3.013829997 podStartE2EDuration="3.013829997s" podCreationTimestamp="2026-01-27 07:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:57.012193885 +0000 UTC m=+1109.607816411" watchObservedRunningTime="2026-01-27 07:34:57.013829997 +0000 UTC m=+1109.609452513" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.020255 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90afe5f8-7032-4882-b8c7-6489375421d4","Type":"ContainerStarted","Data":"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576"} Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.033847 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-846df5dc9d-clqgc" podStartSLOduration=4.033828422 podStartE2EDuration="4.033828422s" podCreationTimestamp="2026-01-27 07:34:53 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:57.029680133 +0000 UTC m=+1109.625302659" watchObservedRunningTime="2026-01-27 07:34:57.033828422 +0000 UTC m=+1109.629450948" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.071478 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hr99f"] Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.076775 4764 scope.go:117] "RemoveContainer" containerID="3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.096533 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hr99f"] Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.107569 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.119957 4764 scope.go:117] "RemoveContainer" containerID="1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df" Jan 27 07:34:57 crc kubenswrapper[4764]: E0127 07:34:57.123296 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df\": container with ID starting with 1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df not found: ID does not exist" containerID="1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.123353 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df"} err="failed to get container status \"1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df\": rpc error: code = NotFound desc = could not find container 
\"1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df\": container with ID starting with 1341b25ced48d1c4147dc7d8a997d03e568e1dfbfbcdc2f3cc7af9e892f0e1df not found: ID does not exist" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.123387 4764 scope.go:117] "RemoveContainer" containerID="3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc" Jan 27 07:34:57 crc kubenswrapper[4764]: E0127 07:34:57.124234 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc\": container with ID starting with 3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc not found: ID does not exist" containerID="3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.124263 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc"} err="failed to get container status \"3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc\": rpc error: code = NotFound desc = could not find container \"3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc\": container with ID starting with 3a4ef6a7cfc25c054aeafebf4cb143a8de7553f6948d4c7e52f779b5561f59dc not found: ID does not exist" Jan 27 07:34:57 crc kubenswrapper[4764]: I0127 07:34:57.230776 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.031398 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerStarted","Data":"2467e178b17cb6ff3a26316eff1553e12bd2ce90d674b1c764c2100b0b50377b"} Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.031726 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerStarted","Data":"9e2386123230d78ab573f8f22c02cdc58fa1969ab62cba414f9847231010108d"} Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.033289 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90afe5f8-7032-4882-b8c7-6489375421d4","Type":"ContainerStarted","Data":"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311"} Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.033405 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api-log" containerID="cri-o://47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576" gracePeriod=30 Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.033469 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.033473 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api" containerID="cri-o://31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311" gracePeriod=30 Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.048335 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a39ef80b-f486-467e-81bd-38eec01902b7","Type":"ContainerStarted","Data":"766e5b33afbe167b9f3ae721b773d96abd00a5f5129045a69125c857b6ee60a6"} Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.048390 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a39ef80b-f486-467e-81bd-38eec01902b7","Type":"ContainerStarted","Data":"85921d8c99cdb39f7de2deceef1630a2f92b2c170052347c9831be2d596a2560"} Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 
07:34:58.052880 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.052864469 podStartE2EDuration="4.052864469s" podCreationTimestamp="2026-01-27 07:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:34:58.051418291 +0000 UTC m=+1110.647040817" watchObservedRunningTime="2026-01-27 07:34:58.052864469 +0000 UTC m=+1110.648486995" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.076651 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.166706576 podStartE2EDuration="5.076630062s" podCreationTimestamp="2026-01-27 07:34:53 +0000 UTC" firstStartedPulling="2026-01-27 07:34:55.450552442 +0000 UTC m=+1108.046174968" lastFinishedPulling="2026-01-27 07:34:56.360475928 +0000 UTC m=+1108.956098454" observedRunningTime="2026-01-27 07:34:58.071296882 +0000 UTC m=+1110.666919408" watchObservedRunningTime="2026-01-27 07:34:58.076630062 +0000 UTC m=+1110.672252588" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.452365 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f76719-e428-4f69-9885-0d763981b164" path="/var/lib/kubelet/pods/89f76719-e428-4f69-9885-0d763981b164/volumes" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.811817 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.856526 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-combined-ca-bundle\") pod \"90afe5f8-7032-4882-b8c7-6489375421d4\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.856656 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data\") pod \"90afe5f8-7032-4882-b8c7-6489375421d4\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.856735 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-scripts\") pod \"90afe5f8-7032-4882-b8c7-6489375421d4\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.856799 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90afe5f8-7032-4882-b8c7-6489375421d4-etc-machine-id\") pod \"90afe5f8-7032-4882-b8c7-6489375421d4\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.856892 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52tft\" (UniqueName: \"kubernetes.io/projected/90afe5f8-7032-4882-b8c7-6489375421d4-kube-api-access-52tft\") pod \"90afe5f8-7032-4882-b8c7-6489375421d4\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.856950 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data-custom\") pod \"90afe5f8-7032-4882-b8c7-6489375421d4\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.856978 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90afe5f8-7032-4882-b8c7-6489375421d4-logs\") pod \"90afe5f8-7032-4882-b8c7-6489375421d4\" (UID: \"90afe5f8-7032-4882-b8c7-6489375421d4\") " Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.857804 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90afe5f8-7032-4882-b8c7-6489375421d4-logs" (OuterVolumeSpecName: "logs") pod "90afe5f8-7032-4882-b8c7-6489375421d4" (UID: "90afe5f8-7032-4882-b8c7-6489375421d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.857850 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90afe5f8-7032-4882-b8c7-6489375421d4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "90afe5f8-7032-4882-b8c7-6489375421d4" (UID: "90afe5f8-7032-4882-b8c7-6489375421d4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.869573 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-scripts" (OuterVolumeSpecName: "scripts") pod "90afe5f8-7032-4882-b8c7-6489375421d4" (UID: "90afe5f8-7032-4882-b8c7-6489375421d4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.882601 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "90afe5f8-7032-4882-b8c7-6489375421d4" (UID: "90afe5f8-7032-4882-b8c7-6489375421d4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.908562 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90afe5f8-7032-4882-b8c7-6489375421d4" (UID: "90afe5f8-7032-4882-b8c7-6489375421d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.922637 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90afe5f8-7032-4882-b8c7-6489375421d4-kube-api-access-52tft" (OuterVolumeSpecName: "kube-api-access-52tft") pod "90afe5f8-7032-4882-b8c7-6489375421d4" (UID: "90afe5f8-7032-4882-b8c7-6489375421d4"). InnerVolumeSpecName "kube-api-access-52tft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.945011 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data" (OuterVolumeSpecName: "config-data") pod "90afe5f8-7032-4882-b8c7-6489375421d4" (UID: "90afe5f8-7032-4882-b8c7-6489375421d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.960825 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.960874 4764 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90afe5f8-7032-4882-b8c7-6489375421d4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.960888 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52tft\" (UniqueName: \"kubernetes.io/projected/90afe5f8-7032-4882-b8c7-6489375421d4-kube-api-access-52tft\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.960897 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.960905 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90afe5f8-7032-4882-b8c7-6489375421d4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.960914 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:58 crc kubenswrapper[4764]: I0127 07:34:58.960941 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90afe5f8-7032-4882-b8c7-6489375421d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.066300 4764 generic.go:334] "Generic 
(PLEG): container finished" podID="90afe5f8-7032-4882-b8c7-6489375421d4" containerID="31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311" exitCode=0 Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.066333 4764 generic.go:334] "Generic (PLEG): container finished" podID="90afe5f8-7032-4882-b8c7-6489375421d4" containerID="47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576" exitCode=143 Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.067281 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.068234 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90afe5f8-7032-4882-b8c7-6489375421d4","Type":"ContainerDied","Data":"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311"} Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.068286 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90afe5f8-7032-4882-b8c7-6489375421d4","Type":"ContainerDied","Data":"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576"} Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.068296 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90afe5f8-7032-4882-b8c7-6489375421d4","Type":"ContainerDied","Data":"9ca93dd44e8c6459b45c1f99596f7b59cb9c6b80732d5750e8d73e63ee666d63"} Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.068313 4764 scope.go:117] "RemoveContainer" containerID="31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.104743 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.170519 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:59 crc kubenswrapper[4764]: 
I0127 07:34:59.192482 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:59 crc kubenswrapper[4764]: E0127 07:34:59.192844 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f76719-e428-4f69-9885-0d763981b164" containerName="init" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.192859 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f76719-e428-4f69-9885-0d763981b164" containerName="init" Jan 27 07:34:59 crc kubenswrapper[4764]: E0127 07:34:59.192876 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api-log" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.192882 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api-log" Jan 27 07:34:59 crc kubenswrapper[4764]: E0127 07:34:59.192890 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f76719-e428-4f69-9885-0d763981b164" containerName="dnsmasq-dns" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.192895 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f76719-e428-4f69-9885-0d763981b164" containerName="dnsmasq-dns" Jan 27 07:34:59 crc kubenswrapper[4764]: E0127 07:34:59.192911 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.192917 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.193098 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api-log" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.193115 4764 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="90afe5f8-7032-4882-b8c7-6489375421d4" containerName="cinder-api" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.193123 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f76719-e428-4f69-9885-0d763981b164" containerName="dnsmasq-dns" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.194054 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.198009 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.198263 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.198416 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.213373 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.274485 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.274787 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02cf5030-57ab-4086-9849-4607dc4e91b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.274805 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.274843 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cf5030-57ab-4086-9849-4607dc4e91b8-logs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.274877 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.275076 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.275172 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-scripts\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.275244 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbp5\" (UniqueName: \"kubernetes.io/projected/02cf5030-57ab-4086-9849-4607dc4e91b8-kube-api-access-dzbp5\") pod \"cinder-api-0\" 
(UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.275426 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.292678 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377414 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377503 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02cf5030-57ab-4086-9849-4607dc4e91b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377531 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377579 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cf5030-57ab-4086-9849-4607dc4e91b8-logs\") pod \"cinder-api-0\" (UID: 
\"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377627 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377681 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377726 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-scripts\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377772 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02cf5030-57ab-4086-9849-4607dc4e91b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377775 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzbp5\" (UniqueName: \"kubernetes.io/projected/02cf5030-57ab-4086-9849-4607dc4e91b8-kube-api-access-dzbp5\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.377936 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.397621 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cf5030-57ab-4086-9849-4607dc4e91b8-logs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.401233 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.401939 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.402052 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.403340 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: 
I0127 07:34:59.407067 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-scripts\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.407152 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzbp5\" (UniqueName: \"kubernetes.io/projected/02cf5030-57ab-4086-9849-4607dc4e91b8-kube-api-access-dzbp5\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.408066 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data\") pod \"cinder-api-0\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.515678 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.685787 4764 scope.go:117] "RemoveContainer" containerID="47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.727763 4764 scope.go:117] "RemoveContainer" containerID="31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311" Jan 27 07:34:59 crc kubenswrapper[4764]: E0127 07:34:59.736590 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311\": container with ID starting with 31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311 not found: ID does not exist" containerID="31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.736646 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311"} err="failed to get container status \"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311\": rpc error: code = NotFound desc = could not find container \"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311\": container with ID starting with 31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311 not found: ID does not exist" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.736678 4764 scope.go:117] "RemoveContainer" containerID="47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576" Jan 27 07:34:59 crc kubenswrapper[4764]: E0127 07:34:59.750917 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576\": container with ID starting with 
47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576 not found: ID does not exist" containerID="47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.750961 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576"} err="failed to get container status \"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576\": rpc error: code = NotFound desc = could not find container \"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576\": container with ID starting with 47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576 not found: ID does not exist" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.750994 4764 scope.go:117] "RemoveContainer" containerID="31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.752979 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311"} err="failed to get container status \"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311\": rpc error: code = NotFound desc = could not find container \"31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311\": container with ID starting with 31b744e42b47c08debfd1d437d5903c498d987e52eff372b73d1967fbfde4311 not found: ID does not exist" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.753020 4764 scope.go:117] "RemoveContainer" containerID="47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576" Jan 27 07:34:59 crc kubenswrapper[4764]: I0127 07:34:59.774958 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576"} err="failed to get container status 
\"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576\": rpc error: code = NotFound desc = could not find container \"47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576\": container with ID starting with 47b80b687c8a0c07d5b266febc71301c14c64d894222e411cc392b36e3e9d576 not found: ID does not exist" Jan 27 07:35:00 crc kubenswrapper[4764]: I0127 07:35:00.080127 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerStarted","Data":"73576bc4fb0ca2564f6d7fde8e868da89a8479d6873d8f1c26f5c51db67803eb"} Jan 27 07:35:00 crc kubenswrapper[4764]: I0127 07:35:00.226506 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:35:00 crc kubenswrapper[4764]: W0127 07:35:00.234273 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02cf5030_57ab_4086_9849_4607dc4e91b8.slice/crio-9802e891c19c0c8ac861ddf80c0ee137a45deb53acaa3d3dbbcb516bf5101387 WatchSource:0}: Error finding container 9802e891c19c0c8ac861ddf80c0ee137a45deb53acaa3d3dbbcb516bf5101387: Status 404 returned error can't find the container with id 9802e891c19c0c8ac861ddf80c0ee137a45deb53acaa3d3dbbcb516bf5101387 Jan 27 07:35:00 crc kubenswrapper[4764]: I0127 07:35:00.460133 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90afe5f8-7032-4882-b8c7-6489375421d4" path="/var/lib/kubelet/pods/90afe5f8-7032-4882-b8c7-6489375421d4/volumes" Jan 27 07:35:00 crc kubenswrapper[4764]: I0127 07:35:00.902028 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:35:01 crc kubenswrapper[4764]: I0127 07:35:01.113062 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerStarted","Data":"b4123583e9f7337782206d77e832137bf3904980141051de4258efe102aed381"} Jan 27 07:35:01 crc kubenswrapper[4764]: I0127 07:35:01.114576 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02cf5030-57ab-4086-9849-4607dc4e91b8","Type":"ContainerStarted","Data":"c752d046bfbebaae6419ccddb3420b63b0330becaa143dc5a0f8fe4fe56adf8c"} Jan 27 07:35:01 crc kubenswrapper[4764]: I0127 07:35:01.114602 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02cf5030-57ab-4086-9849-4607dc4e91b8","Type":"ContainerStarted","Data":"9802e891c19c0c8ac861ddf80c0ee137a45deb53acaa3d3dbbcb516bf5101387"} Jan 27 07:35:01 crc kubenswrapper[4764]: I0127 07:35:01.256511 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:35:01 crc kubenswrapper[4764]: I0127 07:35:01.760181 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.040053 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76bbb58569-zwwt9"] Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.040709 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76bbb58569-zwwt9" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-api" containerID="cri-o://9859784a74b709831b7d92e87105e8bba945a25ce3cf7ead857f595e31fd157a" gracePeriod=30 Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.041538 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76bbb58569-zwwt9" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-httpd" containerID="cri-o://dc060c52c96638446bc4068ce183fd46981dacb69d81dfc78b824c5ca5a8dceb" gracePeriod=30 Jan 27 07:35:02 crc 
kubenswrapper[4764]: I0127 07:35:02.188775 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02cf5030-57ab-4086-9849-4607dc4e91b8","Type":"ContainerStarted","Data":"e68d3268761a71d086c0cff42a36b2a584e2de73402b81238f526ef6313396d0"} Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.189016 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.216855 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cc7fb5f4f-2d6z5"] Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.230170 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.261732 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-internal-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.264862 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflbv\" (UniqueName: \"kubernetes.io/projected/50def599-3481-4d79-9c71-5fb10f6500ab-kube-api-access-kflbv\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.265215 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-combined-ca-bundle\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 
07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.265415 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-httpd-config\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.265565 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-config\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.265737 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-ovndb-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.265904 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-public-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.278235 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc7fb5f4f-2d6z5"] Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.324511 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.324479112 podStartE2EDuration="3.324479112s" podCreationTimestamp="2026-01-27 07:34:59 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:02.231487774 +0000 UTC m=+1114.827110300" watchObservedRunningTime="2026-01-27 07:35:02.324479112 +0000 UTC m=+1114.920101638" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.373198 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-ovndb-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.373257 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-public-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.373293 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-internal-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.373332 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kflbv\" (UniqueName: \"kubernetes.io/projected/50def599-3481-4d79-9c71-5fb10f6500ab-kube-api-access-kflbv\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.373380 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-combined-ca-bundle\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.373413 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-httpd-config\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.373455 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-config\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.390849 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-combined-ca-bundle\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.394501 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-config\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.396695 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-public-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " 
pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.406660 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-httpd-config\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.407404 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-ovndb-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.407701 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-internal-tls-certs\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.437305 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflbv\" (UniqueName: \"kubernetes.io/projected/50def599-3481-4d79-9c71-5fb10f6500ab-kube-api-access-kflbv\") pod \"neutron-cc7fb5f4f-2d6z5\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.500856 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:35:02 crc kubenswrapper[4764]: I0127 07:35:02.579166 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.125508 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc7fb5f4f-2d6z5"] Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.200406 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerStarted","Data":"199e5b31cc58a7ceebfbf395abf1cbffcc1b5a280fcc05727b88efd9e0c74bc6"} Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.201468 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.203093 4764 generic.go:334] "Generic (PLEG): container finished" podID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerID="dc060c52c96638446bc4068ce183fd46981dacb69d81dfc78b824c5ca5a8dceb" exitCode=0 Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.203135 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76bbb58569-zwwt9" event={"ID":"cdd90417-1879-421f-b0a8-04ed0694fb3a","Type":"ContainerDied","Data":"dc060c52c96638446bc4068ce183fd46981dacb69d81dfc78b824c5ca5a8dceb"} Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.208696 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7fb5f4f-2d6z5" event={"ID":"50def599-3481-4d79-9c71-5fb10f6500ab","Type":"ContainerStarted","Data":"3266ae2646f5bf4f0c531b68fe9a497a59e6d95f730f12260eb6b5443ac410f9"} Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.242268 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.521496345 podStartE2EDuration="7.242251914s" podCreationTimestamp="2026-01-27 07:34:56 +0000 UTC" firstStartedPulling="2026-01-27 07:34:57.11958623 +0000 UTC m=+1109.715208756" lastFinishedPulling="2026-01-27 07:35:01.840341799 +0000 UTC 
m=+1114.435964325" observedRunningTime="2026-01-27 07:35:03.22572408 +0000 UTC m=+1115.821346606" watchObservedRunningTime="2026-01-27 07:35:03.242251914 +0000 UTC m=+1115.837874440" Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.396190 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5ff9bfcff8-v9nrc" Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.486308 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68d75bdb9d-z5cr4"] Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.486578 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon-log" containerID="cri-o://8b8b9b70418dd3efcb6afb423d022b9e64927b83f9a4d510ee0fb8ed56b77411" gracePeriod=30 Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.487073 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" containerID="cri-o://98a65d9ef8ad339cc7701d7b492d80a604c5a2e55551f85aca3daebb7b8166c4" gracePeriod=30 Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.499354 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.946029 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-76bbb58569-zwwt9" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: connection refused" Jan 27 07:35:03 crc kubenswrapper[4764]: I0127 07:35:03.972054 4764 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.044849 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.224061 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7fb5f4f-2d6z5" event={"ID":"50def599-3481-4d79-9c71-5fb10f6500ab","Type":"ContainerStarted","Data":"ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e"} Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.224105 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7fb5f4f-2d6z5" event={"ID":"50def599-3481-4d79-9c71-5fb10f6500ab","Type":"ContainerStarted","Data":"4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69"} Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.267792 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cc7fb5f4f-2d6z5" podStartSLOduration=2.267773721 podStartE2EDuration="2.267773721s" podCreationTimestamp="2026-01-27 07:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:04.256918796 +0000 UTC m=+1116.852541322" watchObservedRunningTime="2026-01-27 07:35:04.267773721 +0000 UTC m=+1116.863396247" Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.463575 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.540262 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8pkhm"] Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.559753 4764 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" podUID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerName="dnsmasq-dns" containerID="cri-o://4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a" gracePeriod=10 Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.612749 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 07:35:04 crc kubenswrapper[4764]: I0127 07:35:04.731417 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.226536 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.301176 4764 generic.go:334] "Generic (PLEG): container finished" podID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerID="4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a" exitCode=0 Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.301287 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" event={"ID":"7ae4de8f-7af4-459b-ab48-6096fbadfe67","Type":"ContainerDied","Data":"4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a"} Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.301316 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" event={"ID":"7ae4de8f-7af4-459b-ab48-6096fbadfe67","Type":"ContainerDied","Data":"dbd2e875ceb23a203614b0d3ca661affc9d3cb821f8efb2a238ba651c489aeff"} Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.301343 4764 scope.go:117] "RemoveContainer" containerID="4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.301544 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8pkhm" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.326290 4764 generic.go:334] "Generic (PLEG): container finished" podID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerID="9859784a74b709831b7d92e87105e8bba945a25ce3cf7ead857f595e31fd157a" exitCode=0 Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.326658 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="cinder-scheduler" containerID="cri-o://85921d8c99cdb39f7de2deceef1630a2f92b2c170052347c9831be2d596a2560" gracePeriod=30 Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.327032 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76bbb58569-zwwt9" event={"ID":"cdd90417-1879-421f-b0a8-04ed0694fb3a","Type":"ContainerDied","Data":"9859784a74b709831b7d92e87105e8bba945a25ce3cf7ead857f595e31fd157a"} Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.329377 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="probe" containerID="cri-o://766e5b33afbe167b9f3ae721b773d96abd00a5f5129045a69125c857b6ee60a6" gracePeriod=30 Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.331485 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.358680 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-sb\") pod \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.358746 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-smxqx\" (UniqueName: \"kubernetes.io/projected/7ae4de8f-7af4-459b-ab48-6096fbadfe67-kube-api-access-smxqx\") pod \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.358851 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-svc\") pod \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.358894 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-nb\") pod \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.358914 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-swift-storage-0\") pod \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.359013 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-config\") pod \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\" (UID: \"7ae4de8f-7af4-459b-ab48-6096fbadfe67\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.365601 4764 scope.go:117] "RemoveContainer" containerID="940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.383005 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7ae4de8f-7af4-459b-ab48-6096fbadfe67-kube-api-access-smxqx" (OuterVolumeSpecName: "kube-api-access-smxqx") pod "7ae4de8f-7af4-459b-ab48-6096fbadfe67" (UID: "7ae4de8f-7af4-459b-ab48-6096fbadfe67"). InnerVolumeSpecName "kube-api-access-smxqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.458582 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ae4de8f-7af4-459b-ab48-6096fbadfe67" (UID: "7ae4de8f-7af4-459b-ab48-6096fbadfe67"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.465710 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smxqx\" (UniqueName: \"kubernetes.io/projected/7ae4de8f-7af4-459b-ab48-6096fbadfe67-kube-api-access-smxqx\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.465730 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.480743 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ae4de8f-7af4-459b-ab48-6096fbadfe67" (UID: "7ae4de8f-7af4-459b-ab48-6096fbadfe67"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.490869 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ae4de8f-7af4-459b-ab48-6096fbadfe67" (UID: "7ae4de8f-7af4-459b-ab48-6096fbadfe67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.503062 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ae4de8f-7af4-459b-ab48-6096fbadfe67" (UID: "7ae4de8f-7af4-459b-ab48-6096fbadfe67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.538264 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-config" (OuterVolumeSpecName: "config") pod "7ae4de8f-7af4-459b-ab48-6096fbadfe67" (UID: "7ae4de8f-7af4-459b-ab48-6096fbadfe67"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.563632 4764 scope.go:117] "RemoveContainer" containerID="4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a" Jan 27 07:35:05 crc kubenswrapper[4764]: E0127 07:35:05.564116 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a\": container with ID starting with 4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a not found: ID does not exist" containerID="4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.564167 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a"} err="failed to get container status \"4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a\": rpc error: code = NotFound desc = could not find container \"4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a\": container with ID starting with 4b03a4220a983bf4eb40aac7b6d6adac14ce2de9445f4de2b284d0fb55643e2a not found: ID does not exist" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.564197 4764 scope.go:117] "RemoveContainer" containerID="940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d" Jan 27 07:35:05 crc kubenswrapper[4764]: E0127 07:35:05.564601 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d\": container with ID starting with 940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d not found: ID does not exist" containerID="940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.564637 
4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d"} err="failed to get container status \"940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d\": rpc error: code = NotFound desc = could not find container \"940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d\": container with ID starting with 940db24205221023dde7310d07a757ba310fca0ea3fd23c441e2479fe8eb821d not found: ID does not exist" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.566873 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.566908 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.566922 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.566933 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae4de8f-7af4-459b-ab48-6096fbadfe67-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.658652 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8pkhm"] Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.668508 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.670044 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8pkhm"] Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.769039 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-httpd-config\") pod \"cdd90417-1879-421f-b0a8-04ed0694fb3a\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.769105 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-combined-ca-bundle\") pod \"cdd90417-1879-421f-b0a8-04ed0694fb3a\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.769177 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-ovndb-tls-certs\") pod \"cdd90417-1879-421f-b0a8-04ed0694fb3a\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.769201 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-config\") pod \"cdd90417-1879-421f-b0a8-04ed0694fb3a\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.769320 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-internal-tls-certs\") pod \"cdd90417-1879-421f-b0a8-04ed0694fb3a\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " Jan 27 07:35:05 
crc kubenswrapper[4764]: I0127 07:35:05.769678 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb4dt\" (UniqueName: \"kubernetes.io/projected/cdd90417-1879-421f-b0a8-04ed0694fb3a-kube-api-access-rb4dt\") pod \"cdd90417-1879-421f-b0a8-04ed0694fb3a\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.770087 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-public-tls-certs\") pod \"cdd90417-1879-421f-b0a8-04ed0694fb3a\" (UID: \"cdd90417-1879-421f-b0a8-04ed0694fb3a\") " Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.776608 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cdd90417-1879-421f-b0a8-04ed0694fb3a" (UID: "cdd90417-1879-421f-b0a8-04ed0694fb3a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.778597 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd90417-1879-421f-b0a8-04ed0694fb3a-kube-api-access-rb4dt" (OuterVolumeSpecName: "kube-api-access-rb4dt") pod "cdd90417-1879-421f-b0a8-04ed0694fb3a" (UID: "cdd90417-1879-421f-b0a8-04ed0694fb3a"). InnerVolumeSpecName "kube-api-access-rb4dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.841678 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdd90417-1879-421f-b0a8-04ed0694fb3a" (UID: "cdd90417-1879-421f-b0a8-04ed0694fb3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.863656 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-config" (OuterVolumeSpecName: "config") pod "cdd90417-1879-421f-b0a8-04ed0694fb3a" (UID: "cdd90417-1879-421f-b0a8-04ed0694fb3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.871811 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb4dt\" (UniqueName: \"kubernetes.io/projected/cdd90417-1879-421f-b0a8-04ed0694fb3a-kube-api-access-rb4dt\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.871851 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.871866 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.871879 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.876555 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cdd90417-1879-421f-b0a8-04ed0694fb3a" (UID: "cdd90417-1879-421f-b0a8-04ed0694fb3a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.890575 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cdd90417-1879-421f-b0a8-04ed0694fb3a" (UID: "cdd90417-1879-421f-b0a8-04ed0694fb3a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.893413 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cdd90417-1879-421f-b0a8-04ed0694fb3a" (UID: "cdd90417-1879-421f-b0a8-04ed0694fb3a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.966021 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.973759 4764 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.973804 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:05 crc kubenswrapper[4764]: I0127 07:35:05.973816 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd90417-1879-421f-b0a8-04ed0694fb3a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.338272 4764 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76bbb58569-zwwt9" Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.339419 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76bbb58569-zwwt9" event={"ID":"cdd90417-1879-421f-b0a8-04ed0694fb3a","Type":"ContainerDied","Data":"cc31961e5ab13f028fc08e642b653874b203e104f988e2a37fdec2508504eda2"} Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.339716 4764 scope.go:117] "RemoveContainer" containerID="dc060c52c96638446bc4068ce183fd46981dacb69d81dfc78b824c5ca5a8dceb" Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.339995 4764 generic.go:334] "Generic (PLEG): container finished" podID="a39ef80b-f486-467e-81bd-38eec01902b7" containerID="766e5b33afbe167b9f3ae721b773d96abd00a5f5129045a69125c857b6ee60a6" exitCode=0 Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.340064 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a39ef80b-f486-467e-81bd-38eec01902b7","Type":"ContainerDied","Data":"766e5b33afbe167b9f3ae721b773d96abd00a5f5129045a69125c857b6ee60a6"} Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.360513 4764 scope.go:117] "RemoveContainer" containerID="9859784a74b709831b7d92e87105e8bba945a25ce3cf7ead857f595e31fd157a" Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.383861 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76bbb58569-zwwt9"] Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.391066 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-76bbb58569-zwwt9"] Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.450612 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" path="/var/lib/kubelet/pods/7ae4de8f-7af4-459b-ab48-6096fbadfe67/volumes" Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.451367 4764 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" path="/var/lib/kubelet/pods/cdd90417-1879-421f-b0a8-04ed0694fb3a/volumes" Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.780375 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.858521 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-57b65668f8-l79bk"] Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.858900 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-57b65668f8-l79bk" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api-log" containerID="cri-o://5c130bcdffd3247680a388e07f188696544349d9bec5053bd03f18f6d64dd3a9" gracePeriod=30 Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.859276 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-57b65668f8-l79bk" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api" containerID="cri-o://e43a28bf9f2893daea65f365081c9ba1c9c3cf8b6cb7d3ed556d2cc2b5ea32ae" gracePeriod=30 Jan 27 07:35:06 crc kubenswrapper[4764]: I0127 07:35:06.908817 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:39116->10.217.0.151:8443: read: connection reset by peer" Jan 27 07:35:06 crc kubenswrapper[4764]: E0127 07:35:06.999324 4764 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f40b18c_5d92_40a2_80fe_a7baad40da13.slice/crio-5c130bcdffd3247680a388e07f188696544349d9bec5053bd03f18f6d64dd3a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f40b18c_5d92_40a2_80fe_a7baad40da13.slice/crio-conmon-5c130bcdffd3247680a388e07f188696544349d9bec5053bd03f18f6d64dd3a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b44fa92_de90_4956_8425_e184375fddc1.slice/crio-conmon-98a65d9ef8ad339cc7701d7b492d80a604c5a2e55551f85aca3daebb7b8166c4.scope\": RecentStats: unable to find data in memory cache]" Jan 27 07:35:07 crc kubenswrapper[4764]: I0127 07:35:07.059861 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:35:07 crc kubenswrapper[4764]: I0127 07:35:07.112521 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:35:07 crc kubenswrapper[4764]: I0127 07:35:07.353868 4764 generic.go:334] "Generic (PLEG): container finished" podID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerID="5c130bcdffd3247680a388e07f188696544349d9bec5053bd03f18f6d64dd3a9" exitCode=143 Jan 27 07:35:07 crc kubenswrapper[4764]: I0127 07:35:07.353936 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57b65668f8-l79bk" event={"ID":"0f40b18c-5d92-40a2-80fe-a7baad40da13","Type":"ContainerDied","Data":"5c130bcdffd3247680a388e07f188696544349d9bec5053bd03f18f6d64dd3a9"} Jan 27 07:35:07 crc kubenswrapper[4764]: I0127 07:35:07.363500 4764 generic.go:334] "Generic (PLEG): container finished" podID="6b44fa92-de90-4956-8425-e184375fddc1" containerID="98a65d9ef8ad339cc7701d7b492d80a604c5a2e55551f85aca3daebb7b8166c4" exitCode=0 Jan 27 07:35:07 crc kubenswrapper[4764]: I0127 07:35:07.364505 4764 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68d75bdb9d-z5cr4" event={"ID":"6b44fa92-de90-4956-8425-e184375fddc1","Type":"ContainerDied","Data":"98a65d9ef8ad339cc7701d7b492d80a604c5a2e55551f85aca3daebb7b8166c4"} Jan 27 07:35:08 crc kubenswrapper[4764]: I0127 07:35:08.600469 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.393416 4764 generic.go:334] "Generic (PLEG): container finished" podID="a39ef80b-f486-467e-81bd-38eec01902b7" containerID="85921d8c99cdb39f7de2deceef1630a2f92b2c170052347c9831be2d596a2560" exitCode=0 Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.393946 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a39ef80b-f486-467e-81bd-38eec01902b7","Type":"ContainerDied","Data":"85921d8c99cdb39f7de2deceef1630a2f92b2c170052347c9831be2d596a2560"} Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.676374 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.779074 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data\") pod \"a39ef80b-f486-467e-81bd-38eec01902b7\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.779819 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpjdj\" (UniqueName: \"kubernetes.io/projected/a39ef80b-f486-467e-81bd-38eec01902b7-kube-api-access-zpjdj\") pod \"a39ef80b-f486-467e-81bd-38eec01902b7\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.779933 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-combined-ca-bundle\") pod \"a39ef80b-f486-467e-81bd-38eec01902b7\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.779951 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-scripts\") pod \"a39ef80b-f486-467e-81bd-38eec01902b7\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.779974 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a39ef80b-f486-467e-81bd-38eec01902b7-etc-machine-id\") pod \"a39ef80b-f486-467e-81bd-38eec01902b7\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.780050 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data-custom\") pod \"a39ef80b-f486-467e-81bd-38eec01902b7\" (UID: \"a39ef80b-f486-467e-81bd-38eec01902b7\") " Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.780118 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a39ef80b-f486-467e-81bd-38eec01902b7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a39ef80b-f486-467e-81bd-38eec01902b7" (UID: "a39ef80b-f486-467e-81bd-38eec01902b7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.780566 4764 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a39ef80b-f486-467e-81bd-38eec01902b7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.787487 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-scripts" (OuterVolumeSpecName: "scripts") pod "a39ef80b-f486-467e-81bd-38eec01902b7" (UID: "a39ef80b-f486-467e-81bd-38eec01902b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.789542 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a39ef80b-f486-467e-81bd-38eec01902b7" (UID: "a39ef80b-f486-467e-81bd-38eec01902b7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.807762 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39ef80b-f486-467e-81bd-38eec01902b7-kube-api-access-zpjdj" (OuterVolumeSpecName: "kube-api-access-zpjdj") pod "a39ef80b-f486-467e-81bd-38eec01902b7" (UID: "a39ef80b-f486-467e-81bd-38eec01902b7"). InnerVolumeSpecName "kube-api-access-zpjdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.877997 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a39ef80b-f486-467e-81bd-38eec01902b7" (UID: "a39ef80b-f486-467e-81bd-38eec01902b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.882688 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpjdj\" (UniqueName: \"kubernetes.io/projected/a39ef80b-f486-467e-81bd-38eec01902b7-kube-api-access-zpjdj\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.882713 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.882722 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.882732 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.932914 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data" (OuterVolumeSpecName: "config-data") pod "a39ef80b-f486-467e-81bd-38eec01902b7" (UID: "a39ef80b-f486-467e-81bd-38eec01902b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:09 crc kubenswrapper[4764]: I0127 07:35:09.984351 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a39ef80b-f486-467e-81bd-38eec01902b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.406031 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a39ef80b-f486-467e-81bd-38eec01902b7","Type":"ContainerDied","Data":"e2d9b61987cf55c1f03145ebe3849312bba4c6a23c1421dd088c7ec54f10db24"} Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.406459 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.406459 4764 scope.go:117] "RemoveContainer" containerID="766e5b33afbe167b9f3ae721b773d96abd00a5f5129045a69125c857b6ee60a6" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.410104 4764 generic.go:334] "Generic (PLEG): container finished" podID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerID="e43a28bf9f2893daea65f365081c9ba1c9c3cf8b6cb7d3ed556d2cc2b5ea32ae" exitCode=0 Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.410164 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57b65668f8-l79bk" event={"ID":"0f40b18c-5d92-40a2-80fe-a7baad40da13","Type":"ContainerDied","Data":"e43a28bf9f2893daea65f365081c9ba1c9c3cf8b6cb7d3ed556d2cc2b5ea32ae"} Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.410191 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57b65668f8-l79bk" event={"ID":"0f40b18c-5d92-40a2-80fe-a7baad40da13","Type":"ContainerDied","Data":"6d744f2a87161ee1961da29109e2eacde218ad3fa256a159ca53cd46b17f2eb6"} Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.410204 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d744f2a87161ee1961da29109e2eacde218ad3fa256a159ca53cd46b17f2eb6" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.487658 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.495082 4764 scope.go:117] "RemoveContainer" containerID="85921d8c99cdb39f7de2deceef1630a2f92b2c170052347c9831be2d596a2560" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.601448 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-combined-ca-bundle\") pod \"0f40b18c-5d92-40a2-80fe-a7baad40da13\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.601494 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data\") pod \"0f40b18c-5d92-40a2-80fe-a7baad40da13\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.601525 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f40b18c-5d92-40a2-80fe-a7baad40da13-logs\") pod \"0f40b18c-5d92-40a2-80fe-a7baad40da13\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.601618 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k985k\" (UniqueName: \"kubernetes.io/projected/0f40b18c-5d92-40a2-80fe-a7baad40da13-kube-api-access-k985k\") pod \"0f40b18c-5d92-40a2-80fe-a7baad40da13\" (UID: \"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.601703 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data-custom\") pod \"0f40b18c-5d92-40a2-80fe-a7baad40da13\" (UID: 
\"0f40b18c-5d92-40a2-80fe-a7baad40da13\") " Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.602894 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f40b18c-5d92-40a2-80fe-a7baad40da13-logs" (OuterVolumeSpecName: "logs") pod "0f40b18c-5d92-40a2-80fe-a7baad40da13" (UID: "0f40b18c-5d92-40a2-80fe-a7baad40da13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.606110 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f40b18c-5d92-40a2-80fe-a7baad40da13-kube-api-access-k985k" (OuterVolumeSpecName: "kube-api-access-k985k") pod "0f40b18c-5d92-40a2-80fe-a7baad40da13" (UID: "0f40b18c-5d92-40a2-80fe-a7baad40da13"). InnerVolumeSpecName "kube-api-access-k985k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.606774 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f40b18c-5d92-40a2-80fe-a7baad40da13" (UID: "0f40b18c-5d92-40a2-80fe-a7baad40da13"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.631257 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f40b18c-5d92-40a2-80fe-a7baad40da13" (UID: "0f40b18c-5d92-40a2-80fe-a7baad40da13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.654492 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data" (OuterVolumeSpecName: "config-data") pod "0f40b18c-5d92-40a2-80fe-a7baad40da13" (UID: "0f40b18c-5d92-40a2-80fe-a7baad40da13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.704217 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.704248 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.704259 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f40b18c-5d92-40a2-80fe-a7baad40da13-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.704268 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f40b18c-5d92-40a2-80fe-a7baad40da13-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:10 crc kubenswrapper[4764]: I0127 07:35:10.704278 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k985k\" (UniqueName: \"kubernetes.io/projected/0f40b18c-5d92-40a2-80fe-a7baad40da13-kube-api-access-k985k\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:11 crc kubenswrapper[4764]: I0127 07:35:11.420827 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57b65668f8-l79bk" Jan 27 07:35:11 crc kubenswrapper[4764]: I0127 07:35:11.485381 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-57b65668f8-l79bk"] Jan 27 07:35:11 crc kubenswrapper[4764]: I0127 07:35:11.496938 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-57b65668f8-l79bk"] Jan 27 07:35:11 crc kubenswrapper[4764]: I0127 07:35:11.504344 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 07:35:12 crc kubenswrapper[4764]: I0127 07:35:12.449535 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" path="/var/lib/kubelet/pods/0f40b18c-5d92-40a2-80fe-a7baad40da13/volumes" Jan 27 07:35:12 crc kubenswrapper[4764]: I0127 07:35:12.453675 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c7589cf-vmvrw" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.776981 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.777816 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api-log" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.777837 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api-log" Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.777873 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerName="init" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.777880 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerName="init" Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.777891 4764 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="cinder-scheduler" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.777900 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="cinder-scheduler" Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.777919 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-api" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.777927 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-api" Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.777946 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="probe" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.777954 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="probe" Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.777966 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerName="dnsmasq-dns" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.777973 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerName="dnsmasq-dns" Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.777993 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778000 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api" Jan 27 07:35:15 crc kubenswrapper[4764]: E0127 07:35:15.778025 4764 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-httpd" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778033 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-httpd" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778219 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api-log" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778242 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f40b18c-5d92-40a2-80fe-a7baad40da13" containerName="barbican-api" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778254 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="probe" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778263 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae4de8f-7af4-459b-ab48-6096fbadfe67" containerName="dnsmasq-dns" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778277 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-api" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778300 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" containerName="cinder-scheduler" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.778315 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd90417-1879-421f-b0a8-04ed0694fb3a" containerName="neutron-httpd" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.779254 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.782872 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.783430 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.783508 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-m42qx" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.792272 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.797165 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.797228 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config-secret\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.797278 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkv7\" (UniqueName: \"kubernetes.io/projected/59ae5c6f-76b2-4c65-a8f6-244398895dfd-kube-api-access-dxkv7\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.797320 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.900088 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkv7\" (UniqueName: \"kubernetes.io/projected/59ae5c6f-76b2-4c65-a8f6-244398895dfd-kube-api-access-dxkv7\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.900175 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.900357 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.900397 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config-secret\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.901181 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.908177 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.908315 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config-secret\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:15 crc kubenswrapper[4764]: I0127 07:35:15.918289 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxkv7\" (UniqueName: \"kubernetes.io/projected/59ae5c6f-76b2-4c65-a8f6-244398895dfd-kube-api-access-dxkv7\") pod \"openstackclient\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.108587 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.215878 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.238787 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.279503 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.281197 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.290348 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.310340 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2fe19-e10d-4784-9823-ad215851bc5a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.310400 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3bd2fe19-e10d-4784-9823-ad215851bc5a-openstack-config\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.310508 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87n2t\" (UniqueName: \"kubernetes.io/projected/3bd2fe19-e10d-4784-9823-ad215851bc5a-kube-api-access-87n2t\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" 
Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.310531 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3bd2fe19-e10d-4784-9823-ad215851bc5a-openstack-config-secret\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.412706 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2fe19-e10d-4784-9823-ad215851bc5a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.412824 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3bd2fe19-e10d-4784-9823-ad215851bc5a-openstack-config\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.412971 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87n2t\" (UniqueName: \"kubernetes.io/projected/3bd2fe19-e10d-4784-9823-ad215851bc5a-kube-api-access-87n2t\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.413926 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3bd2fe19-e10d-4784-9823-ad215851bc5a-openstack-config\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.414062 4764 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3bd2fe19-e10d-4784-9823-ad215851bc5a-openstack-config-secret\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.418415 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3bd2fe19-e10d-4784-9823-ad215851bc5a-openstack-config-secret\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.429236 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2fe19-e10d-4784-9823-ad215851bc5a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.434002 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87n2t\" (UniqueName: \"kubernetes.io/projected/3bd2fe19-e10d-4784-9823-ad215851bc5a-kube-api-access-87n2t\") pod \"openstackclient\" (UID: \"3bd2fe19-e10d-4784-9823-ad215851bc5a\") " pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: I0127 07:35:16.609747 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 07:35:16 crc kubenswrapper[4764]: E0127 07:35:16.672732 4764 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 07:35:16 crc kubenswrapper[4764]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_59ae5c6f-76b2-4c65-a8f6-244398895dfd_0(be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe" Netns:"/var/run/netns/c3680d67-1195-48e4-a2d2-a78dff5a1bec" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe;K8S_POD_UID=59ae5c6f-76b2-4c65-a8f6-244398895dfd" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/59ae5c6f-76b2-4c65-a8f6-244398895dfd:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe network default NAD default] [openstack/openstackclient be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe network default NAD default] pod deleted before sandbox ADD operation began Jan 27 07:35:16 crc kubenswrapper[4764]: ' Jan 27 07:35:16 crc kubenswrapper[4764]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 07:35:16 crc kubenswrapper[4764]: > Jan 27 07:35:16 crc kubenswrapper[4764]: E0127 07:35:16.673099 4764 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 07:35:16 crc kubenswrapper[4764]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_59ae5c6f-76b2-4c65-a8f6-244398895dfd_0(be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe" Netns:"/var/run/netns/c3680d67-1195-48e4-a2d2-a78dff5a1bec" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe;K8S_POD_UID=59ae5c6f-76b2-4c65-a8f6-244398895dfd" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/59ae5c6f-76b2-4c65-a8f6-244398895dfd:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe network default NAD default] [openstack/openstackclient be96c71088c14c1d633f16b480d8bf5ff8dd1da178862969013e9c9d8a7df1fe network default NAD default] pod deleted before sandbox ADD operation began Jan 27 07:35:16 crc kubenswrapper[4764]: ' Jan 27 07:35:16 crc kubenswrapper[4764]: ': 
StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 07:35:16 crc kubenswrapper[4764]: > pod="openstack/openstackclient" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.056339 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.506269 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.506991 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3bd2fe19-e10d-4784-9823-ad215851bc5a","Type":"ContainerStarted","Data":"e840f58ffd1a6f87de47d348414f762f4e96acc59438e040d4ec88fd574d33cd"} Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.519989 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.524611 4764 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="59ae5c6f-76b2-4c65-a8f6-244398895dfd" podUID="3bd2fe19-e10d-4784-9823-ad215851bc5a" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.641543 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config-secret\") pod \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.641596 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-combined-ca-bundle\") pod \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.641716 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxkv7\" (UniqueName: \"kubernetes.io/projected/59ae5c6f-76b2-4c65-a8f6-244398895dfd-kube-api-access-dxkv7\") pod \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.641766 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config\") pod \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\" (UID: \"59ae5c6f-76b2-4c65-a8f6-244398895dfd\") " Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.643315 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "59ae5c6f-76b2-4c65-a8f6-244398895dfd" (UID: "59ae5c6f-76b2-4c65-a8f6-244398895dfd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.657213 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ae5c6f-76b2-4c65-a8f6-244398895dfd-kube-api-access-dxkv7" (OuterVolumeSpecName: "kube-api-access-dxkv7") pod "59ae5c6f-76b2-4c65-a8f6-244398895dfd" (UID: "59ae5c6f-76b2-4c65-a8f6-244398895dfd"). InnerVolumeSpecName "kube-api-access-dxkv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.657371 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59ae5c6f-76b2-4c65-a8f6-244398895dfd" (UID: "59ae5c6f-76b2-4c65-a8f6-244398895dfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.657530 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "59ae5c6f-76b2-4c65-a8f6-244398895dfd" (UID: "59ae5c6f-76b2-4c65-a8f6-244398895dfd"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.744245 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxkv7\" (UniqueName: \"kubernetes.io/projected/59ae5c6f-76b2-4c65-a8f6-244398895dfd-kube-api-access-dxkv7\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.744276 4764 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.744285 4764 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:17 crc kubenswrapper[4764]: I0127 07:35:17.744295 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ae5c6f-76b2-4c65-a8f6-244398895dfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:18 crc kubenswrapper[4764]: I0127 07:35:18.460926 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ae5c6f-76b2-4c65-a8f6-244398895dfd" path="/var/lib/kubelet/pods/59ae5c6f-76b2-4c65-a8f6-244398895dfd/volumes" Jan 27 07:35:18 crc kubenswrapper[4764]: I0127 07:35:18.513148 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 07:35:18 crc kubenswrapper[4764]: I0127 07:35:18.517167 4764 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="59ae5c6f-76b2-4c65-a8f6-244398895dfd" podUID="3bd2fe19-e10d-4784-9823-ad215851bc5a" Jan 27 07:35:18 crc kubenswrapper[4764]: I0127 07:35:18.520602 4764 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="59ae5c6f-76b2-4c65-a8f6-244398895dfd" podUID="3bd2fe19-e10d-4784-9823-ad215851bc5a" Jan 27 07:35:18 crc kubenswrapper[4764]: I0127 07:35:18.599530 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.712859 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7875b648f9-hbq8d"] Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.714864 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.722931 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.723040 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.723105 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.745972 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7875b648f9-hbq8d"] Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.789325 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-config-data\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.789392 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-etc-swift\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.789420 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9jn\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-kube-api-access-tr9jn\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: 
I0127 07:35:19.789656 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-combined-ca-bundle\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.789819 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-public-tls-certs\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.789873 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-run-httpd\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.789922 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.789939 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-log-httpd\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 
07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891713 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-run-httpd\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891761 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891782 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-log-httpd\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891827 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-config-data\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891855 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-etc-swift\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891878 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tr9jn\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-kube-api-access-tr9jn\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891939 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-combined-ca-bundle\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.891992 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-public-tls-certs\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.892950 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-log-httpd\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.893159 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-run-httpd\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.901045 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-config-data\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.901245 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-public-tls-certs\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.901304 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.902720 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-combined-ca-bundle\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.909303 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-etc-swift\") pod \"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:19 crc kubenswrapper[4764]: I0127 07:35:19.921418 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9jn\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-kube-api-access-tr9jn\") pod 
\"swift-proxy-7875b648f9-hbq8d\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:20 crc kubenswrapper[4764]: I0127 07:35:20.049650 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:20 crc kubenswrapper[4764]: I0127 07:35:20.620364 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7875b648f9-hbq8d"] Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.492186 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.493588 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-central-agent" containerID="cri-o://2467e178b17cb6ff3a26316eff1553e12bd2ce90d674b1c764c2100b0b50377b" gracePeriod=30 Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.493626 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="proxy-httpd" containerID="cri-o://199e5b31cc58a7ceebfbf395abf1cbffcc1b5a280fcc05727b88efd9e0c74bc6" gracePeriod=30 Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.493873 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-notification-agent" containerID="cri-o://73576bc4fb0ca2564f6d7fde8e868da89a8479d6873d8f1c26f5c51db67803eb" gracePeriod=30 Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.493953 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="sg-core" 
containerID="cri-o://b4123583e9f7337782206d77e832137bf3904980141051de4258efe102aed381" gracePeriod=30 Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.513680 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": EOF" Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.562897 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7875b648f9-hbq8d" event={"ID":"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10","Type":"ContainerStarted","Data":"66db63deb4cf08666efd669bb6549053c928ca8b295fa8ffcf41c08f09830091"} Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.563327 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7875b648f9-hbq8d" event={"ID":"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10","Type":"ContainerStarted","Data":"e6f16aea35dced08a8d9b41d8bbd64099b4f1fe033927a503a15bd5a7c9f562e"} Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.563344 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7875b648f9-hbq8d" event={"ID":"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10","Type":"ContainerStarted","Data":"3ff507b5bfcf056cb182c37d214ab857562c38a558f1b381a4660db2a80ed73b"} Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.563391 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.563421 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:21 crc kubenswrapper[4764]: I0127 07:35:21.585646 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7875b648f9-hbq8d" podStartSLOduration=2.585626719 podStartE2EDuration="2.585626719s" podCreationTimestamp="2026-01-27 07:35:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:21.582268011 +0000 UTC m=+1134.177890537" watchObservedRunningTime="2026-01-27 07:35:21.585626719 +0000 UTC m=+1134.181249245" Jan 27 07:35:22 crc kubenswrapper[4764]: I0127 07:35:22.575705 4764 generic.go:334] "Generic (PLEG): container finished" podID="4639425c-f733-47d7-a852-d79f16dc87e6" containerID="199e5b31cc58a7ceebfbf395abf1cbffcc1b5a280fcc05727b88efd9e0c74bc6" exitCode=0 Jan 27 07:35:22 crc kubenswrapper[4764]: I0127 07:35:22.576137 4764 generic.go:334] "Generic (PLEG): container finished" podID="4639425c-f733-47d7-a852-d79f16dc87e6" containerID="b4123583e9f7337782206d77e832137bf3904980141051de4258efe102aed381" exitCode=2 Jan 27 07:35:22 crc kubenswrapper[4764]: I0127 07:35:22.576148 4764 generic.go:334] "Generic (PLEG): container finished" podID="4639425c-f733-47d7-a852-d79f16dc87e6" containerID="2467e178b17cb6ff3a26316eff1553e12bd2ce90d674b1c764c2100b0b50377b" exitCode=0 Jan 27 07:35:22 crc kubenswrapper[4764]: I0127 07:35:22.575796 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerDied","Data":"199e5b31cc58a7ceebfbf395abf1cbffcc1b5a280fcc05727b88efd9e0c74bc6"} Jan 27 07:35:22 crc kubenswrapper[4764]: I0127 07:35:22.576252 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerDied","Data":"b4123583e9f7337782206d77e832137bf3904980141051de4258efe102aed381"} Jan 27 07:35:22 crc kubenswrapper[4764]: I0127 07:35:22.576264 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerDied","Data":"2467e178b17cb6ff3a26316eff1553e12bd2ce90d674b1c764c2100b0b50377b"} Jan 27 07:35:23 crc kubenswrapper[4764]: I0127 07:35:23.586877 4764 
generic.go:334] "Generic (PLEG): container finished" podID="4639425c-f733-47d7-a852-d79f16dc87e6" containerID="73576bc4fb0ca2564f6d7fde8e868da89a8479d6873d8f1c26f5c51db67803eb" exitCode=0 Jan 27 07:35:23 crc kubenswrapper[4764]: I0127 07:35:23.586920 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerDied","Data":"73576bc4fb0ca2564f6d7fde8e868da89a8479d6873d8f1c26f5c51db67803eb"} Jan 27 07:35:25 crc kubenswrapper[4764]: I0127 07:35:25.059649 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:35:25 crc kubenswrapper[4764]: I0127 07:35:25.637197 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:35:25 crc kubenswrapper[4764]: I0127 07:35:25.637522 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-log" containerID="cri-o://cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68" gracePeriod=30 Jan 27 07:35:25 crc kubenswrapper[4764]: I0127 07:35:25.637604 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-httpd" containerID="cri-o://28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756" gracePeriod=30 Jan 27 07:35:26 crc kubenswrapper[4764]: I0127 07:35:26.504023 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": dial tcp 10.217.0.169:3000: connect: connection refused" Jan 27 07:35:26 crc kubenswrapper[4764]: I0127 07:35:26.619805 4764 generic.go:334] "Generic (PLEG): 
container finished" podID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerID="cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68" exitCode=143 Jan 27 07:35:26 crc kubenswrapper[4764]: I0127 07:35:26.619951 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be230ef-8bfc-453c-9653-dcae5c70bee7","Type":"ContainerDied","Data":"cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68"} Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.210984 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.277959 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-log-httpd\") pod \"4639425c-f733-47d7-a852-d79f16dc87e6\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278210 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-config-data\") pod \"4639425c-f733-47d7-a852-d79f16dc87e6\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278404 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-run-httpd\") pod \"4639425c-f733-47d7-a852-d79f16dc87e6\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278423 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4639425c-f733-47d7-a852-d79f16dc87e6" (UID: 
"4639425c-f733-47d7-a852-d79f16dc87e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278515 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-sg-core-conf-yaml\") pod \"4639425c-f733-47d7-a852-d79f16dc87e6\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278710 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4639425c-f733-47d7-a852-d79f16dc87e6" (UID: "4639425c-f733-47d7-a852-d79f16dc87e6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278728 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp728\" (UniqueName: \"kubernetes.io/projected/4639425c-f733-47d7-a852-d79f16dc87e6-kube-api-access-fp728\") pod \"4639425c-f733-47d7-a852-d79f16dc87e6\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278805 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-combined-ca-bundle\") pod \"4639425c-f733-47d7-a852-d79f16dc87e6\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.278828 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-scripts\") pod \"4639425c-f733-47d7-a852-d79f16dc87e6\" (UID: \"4639425c-f733-47d7-a852-d79f16dc87e6\") " Jan 27 07:35:27 crc 
kubenswrapper[4764]: I0127 07:35:27.279265 4764 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.279284 4764 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4639425c-f733-47d7-a852-d79f16dc87e6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.285409 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4639425c-f733-47d7-a852-d79f16dc87e6-kube-api-access-fp728" (OuterVolumeSpecName: "kube-api-access-fp728") pod "4639425c-f733-47d7-a852-d79f16dc87e6" (UID: "4639425c-f733-47d7-a852-d79f16dc87e6"). InnerVolumeSpecName "kube-api-access-fp728". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.286072 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-scripts" (OuterVolumeSpecName: "scripts") pod "4639425c-f733-47d7-a852-d79f16dc87e6" (UID: "4639425c-f733-47d7-a852-d79f16dc87e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.311552 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4639425c-f733-47d7-a852-d79f16dc87e6" (UID: "4639425c-f733-47d7-a852-d79f16dc87e6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.380607 4764 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.380636 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp728\" (UniqueName: \"kubernetes.io/projected/4639425c-f733-47d7-a852-d79f16dc87e6-kube-api-access-fp728\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.380649 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.380931 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4639425c-f733-47d7-a852-d79f16dc87e6" (UID: "4639425c-f733-47d7-a852-d79f16dc87e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.390380 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-config-data" (OuterVolumeSpecName: "config-data") pod "4639425c-f733-47d7-a852-d79f16dc87e6" (UID: "4639425c-f733-47d7-a852-d79f16dc87e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.484848 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.484883 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4639425c-f733-47d7-a852-d79f16dc87e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.632151 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4639425c-f733-47d7-a852-d79f16dc87e6","Type":"ContainerDied","Data":"9e2386123230d78ab573f8f22c02cdc58fa1969ab62cba414f9847231010108d"} Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.632200 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.632479 4764 scope.go:117] "RemoveContainer" containerID="199e5b31cc58a7ceebfbf395abf1cbffcc1b5a280fcc05727b88efd9e0c74bc6" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.634874 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3bd2fe19-e10d-4784-9823-ad215851bc5a","Type":"ContainerStarted","Data":"27c1bd0ee385df90718d86a6b73b9c7a3926c841514a715668c0b6609234594f"} Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.666190 4764 scope.go:117] "RemoveContainer" containerID="b4123583e9f7337782206d77e832137bf3904980141051de4258efe102aed381" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.666850 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.764533778 podStartE2EDuration="11.666834826s" podCreationTimestamp="2026-01-27 07:35:16 +0000 
UTC" firstStartedPulling="2026-01-27 07:35:17.054509313 +0000 UTC m=+1129.650131839" lastFinishedPulling="2026-01-27 07:35:26.956810361 +0000 UTC m=+1139.552432887" observedRunningTime="2026-01-27 07:35:27.656823384 +0000 UTC m=+1140.252445930" watchObservedRunningTime="2026-01-27 07:35:27.666834826 +0000 UTC m=+1140.262457352" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.690818 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.706579 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.710980 4764 scope.go:117] "RemoveContainer" containerID="73576bc4fb0ca2564f6d7fde8e868da89a8479d6873d8f1c26f5c51db67803eb" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.720596 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:27 crc kubenswrapper[4764]: E0127 07:35:27.720986 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-central-agent" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.720998 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-central-agent" Jan 27 07:35:27 crc kubenswrapper[4764]: E0127 07:35:27.721028 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="sg-core" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.721035 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="sg-core" Jan 27 07:35:27 crc kubenswrapper[4764]: E0127 07:35:27.721048 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-notification-agent" Jan 27 07:35:27 crc 
kubenswrapper[4764]: I0127 07:35:27.721054 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-notification-agent" Jan 27 07:35:27 crc kubenswrapper[4764]: E0127 07:35:27.721063 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="proxy-httpd" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.721070 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="proxy-httpd" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.721265 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-central-agent" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.721279 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="proxy-httpd" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.721292 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="sg-core" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.721305 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" containerName="ceilometer-notification-agent" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.722792 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.727329 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.727582 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.729133 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.737110 4764 scope.go:117] "RemoveContainer" containerID="2467e178b17cb6ff3a26316eff1553e12bd2ce90d674b1c764c2100b0b50377b" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.791459 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-run-httpd\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.791556 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-scripts\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.791627 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.791656 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-log-httpd\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.791788 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.791906 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-config-data\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.791949 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th45r\" (UniqueName: \"kubernetes.io/projected/5e0ff04b-e918-4ddc-8004-715fd011adb0-kube-api-access-th45r\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.893376 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-config-data\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.893426 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th45r\" (UniqueName: \"kubernetes.io/projected/5e0ff04b-e918-4ddc-8004-715fd011adb0-kube-api-access-th45r\") pod \"ceilometer-0\" (UID: 
\"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.893470 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-run-httpd\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.893491 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-scripts\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.893527 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.893550 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-log-httpd\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.893612 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.894247 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-run-httpd\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.894574 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-log-httpd\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.898596 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.898746 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-scripts\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.899292 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-config-data\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.904190 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:27 crc kubenswrapper[4764]: I0127 07:35:27.915203 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-th45r\" (UniqueName: \"kubernetes.io/projected/5e0ff04b-e918-4ddc-8004-715fd011adb0-kube-api-access-th45r\") pod \"ceilometer-0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " pod="openstack/ceilometer-0" Jan 27 07:35:28 crc kubenswrapper[4764]: I0127 07:35:28.046519 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:28 crc kubenswrapper[4764]: I0127 07:35:28.463533 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4639425c-f733-47d7-a852-d79f16dc87e6" path="/var/lib/kubelet/pods/4639425c-f733-47d7-a852-d79f16dc87e6/volumes" Jan 27 07:35:28 crc kubenswrapper[4764]: I0127 07:35:28.546914 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:28 crc kubenswrapper[4764]: I0127 07:35:28.599563 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68d75bdb9d-z5cr4" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 27 07:35:28 crc kubenswrapper[4764]: I0127 07:35:28.644824 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerStarted","Data":"2868fb12a5fb87580e07aa2ae400d976e82eab92d9db5cd5714e7fe4d7dd117d"} Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.272756 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.327645 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-combined-ca-bundle\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.327960 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-225sz\" (UniqueName: \"kubernetes.io/projected/4be230ef-8bfc-453c-9653-dcae5c70bee7-kube-api-access-225sz\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.327993 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-logs\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.328033 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.328055 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-scripts\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.328070 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-internal-tls-certs\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.328100 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-httpd-run\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.328151 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-config-data\") pod \"4be230ef-8bfc-453c-9653-dcae5c70bee7\" (UID: \"4be230ef-8bfc-453c-9653-dcae5c70bee7\") " Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.332966 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-logs" (OuterVolumeSpecName: "logs") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.333260 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.335051 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.358748 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be230ef-8bfc-453c-9653-dcae5c70bee7-kube-api-access-225sz" (OuterVolumeSpecName: "kube-api-access-225sz") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "kube-api-access-225sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.360654 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-scripts" (OuterVolumeSpecName: "scripts") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.364613 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.394779 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-config-data" (OuterVolumeSpecName: "config-data") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.395025 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4be230ef-8bfc-453c-9653-dcae5c70bee7" (UID: "4be230ef-8bfc-453c-9653-dcae5c70bee7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.429650 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.429947 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-225sz\" (UniqueName: \"kubernetes.io/projected/4be230ef-8bfc-453c-9653-dcae5c70bee7-kube-api-access-225sz\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.430043 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.430128 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 07:35:29 
crc kubenswrapper[4764]: I0127 07:35:29.430195 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.430271 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.430327 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be230ef-8bfc-453c-9653-dcae5c70bee7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.430381 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be230ef-8bfc-453c-9653-dcae5c70bee7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.451233 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.532721 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.659223 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerStarted","Data":"d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f"} Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.662908 4764 generic.go:334] "Generic (PLEG): container finished" podID="4be230ef-8bfc-453c-9653-dcae5c70bee7" 
containerID="28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756" exitCode=0 Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.662968 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be230ef-8bfc-453c-9653-dcae5c70bee7","Type":"ContainerDied","Data":"28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756"} Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.663001 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be230ef-8bfc-453c-9653-dcae5c70bee7","Type":"ContainerDied","Data":"6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119"} Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.663023 4764 scope.go:117] "RemoveContainer" containerID="28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.663019 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.701872 4764 scope.go:117] "RemoveContainer" containerID="cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.708603 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.732753 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.735891 4764 scope.go:117] "RemoveContainer" containerID="28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756" Jan 27 07:35:29 crc kubenswrapper[4764]: E0127 07:35:29.740391 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756\": container with ID starting with 28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756 not found: ID does not exist" containerID="28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.740459 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756"} err="failed to get container status \"28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756\": rpc error: code = NotFound desc = could not find container \"28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756\": container with ID starting with 28f1a18cbc8e4e74271dab87167e8107430a26493a479c9b758482bdb03de756 not found: ID does not exist" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.740499 4764 scope.go:117] "RemoveContainer" containerID="cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68" Jan 
27 07:35:29 crc kubenswrapper[4764]: E0127 07:35:29.741559 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68\": container with ID starting with cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68 not found: ID does not exist" containerID="cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.741600 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68"} err="failed to get container status \"cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68\": rpc error: code = NotFound desc = could not find container \"cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68\": container with ID starting with cf2a5195e2481fd8afc38e083fbd2bd05e6dfca1a10b0c34593145d744010d68 not found: ID does not exist" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.748143 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:35:29 crc kubenswrapper[4764]: E0127 07:35:29.748764 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-log" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.748862 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-log" Jan 27 07:35:29 crc kubenswrapper[4764]: E0127 07:35:29.748915 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-httpd" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.748925 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-httpd" Jan 
27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.749143 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-log" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.749170 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" containerName="glance-httpd" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.750843 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.753978 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.754096 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.766362 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.880547 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941275 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-config-data\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941323 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941385 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-scripts\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941419 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941460 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941504 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-logs\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941534 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:29 crc kubenswrapper[4764]: I0127 07:35:29.941618 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxq5\" (UniqueName: \"kubernetes.io/projected/09c36c41-b1e7-4d09-830c-d879d6b9a982-kube-api-access-rpxq5\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043320 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043477 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-logs\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043526 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043552 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxq5\" (UniqueName: \"kubernetes.io/projected/09c36c41-b1e7-4d09-830c-d879d6b9a982-kube-api-access-rpxq5\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043627 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-config-data\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043663 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043715 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-scripts\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.043762 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.044030 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.044140 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-logs\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.044251 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.049289 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-scripts\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.049350 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.064318 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7875b648f9-hbq8d"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.066026 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.069844 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxq5\" (UniqueName: \"kubernetes.io/projected/09c36c41-b1e7-4d09-830c-d879d6b9a982-kube-api-access-rpxq5\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.070552 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-config-data\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.108116 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.379678 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.491855 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be230ef-8bfc-453c-9653-dcae5c70bee7" path="/var/lib/kubelet/pods/4be230ef-8bfc-453c-9653-dcae5c70bee7/volumes"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.683420 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerStarted","Data":"eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17"}
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.684499 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerStarted","Data":"cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806"}
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.933718 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f9c8d5-vhpg6"]
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.936138 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.958683 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8657d6789d-n6r2p"]
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.966922 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:30 crc kubenswrapper[4764]: I0127 07:35:30.976265 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f9c8d5-vhpg6"]
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.023770 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8657d6789d-n6r2p"]
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.066598 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.075808 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-config-data-custom\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.075856 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-config-data\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.075889 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmgrb\" (UniqueName: \"kubernetes.io/projected/a45db52d-2a92-4743-9a8b-12e623299cd5-kube-api-access-vmgrb\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.075927 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-combined-ca-bundle\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.075966 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-combined-ca-bundle\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.075983 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-config-data-custom\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.076051 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0cca1ae-6ef7-421a-b481-c4251ff65668-logs\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.076068 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-config-data\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.076093 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a45db52d-2a92-4743-9a8b-12e623299cd5-logs\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.076112 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92qgq\" (UniqueName: \"kubernetes.io/projected/b0cca1ae-6ef7-421a-b481-c4251ff65668-kube-api-access-92qgq\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.095882 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bdbf9956d-hxvm6"]
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.097359 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.111926 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bdbf9956d-hxvm6"]
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178023 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a45db52d-2a92-4743-9a8b-12e623299cd5-logs\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178075 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92qgq\" (UniqueName: \"kubernetes.io/projected/b0cca1ae-6ef7-421a-b481-c4251ff65668-kube-api-access-92qgq\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178121 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-config-data-custom\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178153 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-config-data\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178189 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmgrb\" (UniqueName: \"kubernetes.io/projected/a45db52d-2a92-4743-9a8b-12e623299cd5-kube-api-access-vmgrb\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178244 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-combined-ca-bundle\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178295 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-combined-ca-bundle\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178322 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-config-data-custom\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178421 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0cca1ae-6ef7-421a-b481-c4251ff65668-logs\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.178466 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-config-data\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.180385 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0cca1ae-6ef7-421a-b481-c4251ff65668-logs\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.180762 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a45db52d-2a92-4743-9a8b-12e623299cd5-logs\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.195897 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-config-data\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.195951 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-combined-ca-bundle\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.196338 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-config-data-custom\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.196802 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92qgq\" (UniqueName: \"kubernetes.io/projected/b0cca1ae-6ef7-421a-b481-c4251ff65668-kube-api-access-92qgq\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.197190 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a45db52d-2a92-4743-9a8b-12e623299cd5-combined-ca-bundle\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.200478 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-config-data-custom\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.201613 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cca1ae-6ef7-421a-b481-c4251ff65668-config-data\") pod \"barbican-worker-f9c8d5-vhpg6\" (UID: \"b0cca1ae-6ef7-421a-b481-c4251ff65668\") " pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.208322 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmgrb\" (UniqueName: \"kubernetes.io/projected/a45db52d-2a92-4743-9a8b-12e623299cd5-kube-api-access-vmgrb\") pod \"barbican-keystone-listener-8657d6789d-n6r2p\" (UID: \"a45db52d-2a92-4743-9a8b-12e623299cd5\") " pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.272842 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f9c8d5-vhpg6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.280745 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbblj\" (UniqueName: \"kubernetes.io/projected/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-kube-api-access-hbblj\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.280814 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-public-tls-certs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.280835 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-logs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.280851 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-internal-tls-certs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.280901 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-config-data-custom\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.293110 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-combined-ca-bundle\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.293316 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-config-data\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.309602 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.395598 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-combined-ca-bundle\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.395718 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-config-data\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.395815 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbblj\" (UniqueName: \"kubernetes.io/projected/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-kube-api-access-hbblj\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.395854 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-public-tls-certs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.395874 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-logs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.395893 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-internal-tls-certs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.395939 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-config-data-custom\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.400151 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-logs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.404315 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-internal-tls-certs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.410778 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-config-data-custom\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.418618 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-combined-ca-bundle\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.419127 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-public-tls-certs\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.420073 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-config-data\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.429256 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbblj\" (UniqueName: \"kubernetes.io/projected/bc9a7b7d-fe4a-4120-a0db-f757fba16ccf-kube-api-access-hbblj\") pod \"barbican-api-6bdbf9956d-hxvm6\" (UID: \"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf\") " pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.467018 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.700517 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c36c41-b1e7-4d09-830c-d879d6b9a982","Type":"ContainerStarted","Data":"2ab0cc3ea79ed64ff9bb293ec9e6bcd0fdb719ae520a3616b641481c2228fc7c"}
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.792813 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f9c8d5-vhpg6"]
Jan 27 07:35:31 crc kubenswrapper[4764]: I0127 07:35:31.951703 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8657d6789d-n6r2p"]
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.081964 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bdbf9956d-hxvm6"]
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.597357 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cc7fb5f4f-2d6z5"
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.698686 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86db767f96-qw88w"]
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.699025 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86db767f96-qw88w" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-api" containerID="cri-o://6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.699194 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86db767f96-qw88w" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-httpd" containerID="cri-o://1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.748273 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdbf9956d-hxvm6" event={"ID":"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf","Type":"ContainerStarted","Data":"f51bce6f4e330ed66726bf5052a2f22d9ea7ad8973958e10e86a724886e54859"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.748333 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdbf9956d-hxvm6" event={"ID":"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf","Type":"ContainerStarted","Data":"f4f35c255a91fd2d9a1edebe47f3547c0c3da193d47b88a97db599477ee69e0e"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.748349 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdbf9956d-hxvm6" event={"ID":"bc9a7b7d-fe4a-4120-a0db-f757fba16ccf","Type":"ContainerStarted","Data":"71ef0024228ba3067fc191f814eaaad4409cce76f9b383dcc94292ba00318465"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.748901 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.749084 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bdbf9956d-hxvm6"
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.765723 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9c8d5-vhpg6" event={"ID":"b0cca1ae-6ef7-421a-b481-c4251ff65668","Type":"ContainerStarted","Data":"5d0bcf620e45001e9caf29373b65d569c95e73fb7dacf75157455889d35df61c"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.765784 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9c8d5-vhpg6" event={"ID":"b0cca1ae-6ef7-421a-b481-c4251ff65668","Type":"ContainerStarted","Data":"17b9394f5531d0cb851f083a241155048bef696c0ce307ca298981a672539921"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.765796 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9c8d5-vhpg6" event={"ID":"b0cca1ae-6ef7-421a-b481-c4251ff65668","Type":"ContainerStarted","Data":"168390ebe3b921cddfc8d0bc89acf2b1fc572bd8aae786d5e98c72adf201a292"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.781707 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bdbf9956d-hxvm6" podStartSLOduration=2.7816822070000002 podStartE2EDuration="2.781682207s" podCreationTimestamp="2026-01-27 07:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:32.766886779 +0000 UTC m=+1145.362509375" watchObservedRunningTime="2026-01-27 07:35:32.781682207 +0000 UTC m=+1145.377304733"
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.802730 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c36c41-b1e7-4d09-830c-d879d6b9a982","Type":"ContainerStarted","Data":"a7e4aebd4b980a684303610ef13089bf908063cf368e7fa7ae3bd51f375804b6"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.824852 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f9c8d5-vhpg6" podStartSLOduration=2.824631043 podStartE2EDuration="2.824631043s" podCreationTimestamp="2026-01-27 07:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:32.790746754 +0000 UTC m=+1145.386369280" watchObservedRunningTime="2026-01-27 07:35:32.824631043 +0000 UTC m=+1145.420253569"
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.835726 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerStarted","Data":"2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.836294 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-central-agent" containerID="cri-o://d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.836518 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.836907 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="proxy-httpd" containerID="cri-o://2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.837023 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="sg-core" containerID="cri-o://eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.837057 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-notification-agent" containerID="cri-o://cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.839754 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-57b4cd6565-zcddr"]
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.839994 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-57b4cd6565-zcddr" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker-log" containerID="cri-o://912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.840143 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-57b4cd6565-zcddr" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker" containerID="cri-o://b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4" gracePeriod=30
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.862670 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p" event={"ID":"a45db52d-2a92-4743-9a8b-12e623299cd5","Type":"ContainerStarted","Data":"453e88dfeab53067aa9f0b10ea76c5a96c5d2fa522400793f5d4a1eaa424ef9b"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.862895 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p" event={"ID":"a45db52d-2a92-4743-9a8b-12e623299cd5","Type":"ContainerStarted","Data":"622bb94b58c90c61d4b4d103f6d8e7a37d3a65b8d8469a384bcc514e935cb90a"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.862959 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p" event={"ID":"a45db52d-2a92-4743-9a8b-12e623299cd5","Type":"ContainerStarted","Data":"ed07af4f9563cb0ca4fb4ac29cb7a211bd2317a610122b1862cbca81d4147570"}
Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.881489 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.350397695 podStartE2EDuration="5.881474643s" podCreationTimestamp="2026-01-27 07:35:27 +0000 UTC" firstStartedPulling="2026-01-27 07:35:28.553774579 +0000 UTC m=+1141.149397105" lastFinishedPulling="2026-01-27 07:35:32.084851527 +0000 UTC m=+1144.680474053" observedRunningTime="2026-01-27 07:35:32.878030023 +0000 UTC m=+1145.473652549"
watchObservedRunningTime="2026-01-27 07:35:32.881474643 +0000 UTC m=+1145.477097169" Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.932804 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8657d6789d-n6r2p" podStartSLOduration=2.932784248 podStartE2EDuration="2.932784248s" podCreationTimestamp="2026-01-27 07:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:32.921957504 +0000 UTC m=+1145.517580030" watchObservedRunningTime="2026-01-27 07:35:32.932784248 +0000 UTC m=+1145.528406774" Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.990040 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-dffd4f5f4-gww9r"] Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.993171 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener-log" containerID="cri-o://4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc" gracePeriod=30 Jan 27 07:35:32 crc kubenswrapper[4764]: I0127 07:35:32.993645 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener" containerID="cri-o://44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc" gracePeriod=30 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.727861 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.788242 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-combined-ca-bundle\") pod \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.788321 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-logs\") pod \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.788426 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data\") pod \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.788592 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt6gb\" (UniqueName: \"kubernetes.io/projected/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-kube-api-access-wt6gb\") pod \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.788624 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data-custom\") pod \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\" (UID: \"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6\") " Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.790162 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-logs" (OuterVolumeSpecName: "logs") pod "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" (UID: "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.799394 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" (UID: "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.802623 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-kube-api-access-wt6gb" (OuterVolumeSpecName: "kube-api-access-wt6gb") pod "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" (UID: "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6"). InnerVolumeSpecName "kube-api-access-wt6gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.859635 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" (UID: "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.872540 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data" (OuterVolumeSpecName: "config-data") pod "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" (UID: "4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.886957 4764 generic.go:334] "Generic (PLEG): container finished" podID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerID="1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e" exitCode=0 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.887031 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86db767f96-qw88w" event={"ID":"0d9c8e92-873f-4623-b6ab-4bc09eacaefd","Type":"ContainerDied","Data":"1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.890088 4764 generic.go:334] "Generic (PLEG): container finished" podID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerID="2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6" exitCode=0 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.890111 4764 generic.go:334] "Generic (PLEG): container finished" podID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerID="eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17" exitCode=2 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.890118 4764 generic.go:334] "Generic (PLEG): container finished" podID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerID="cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806" exitCode=0 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.890155 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerDied","Data":"2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.890180 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerDied","Data":"eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.890190 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerDied","Data":"cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.892326 4764 generic.go:334] "Generic (PLEG): container finished" podID="6b44fa92-de90-4956-8425-e184375fddc1" containerID="8b8b9b70418dd3efcb6afb423d022b9e64927b83f9a4d510ee0fb8ed56b77411" exitCode=137 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.892360 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68d75bdb9d-z5cr4" event={"ID":"6b44fa92-de90-4956-8425-e184375fddc1","Type":"ContainerDied","Data":"8b8b9b70418dd3efcb6afb423d022b9e64927b83f9a4d510ee0fb8ed56b77411"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.894282 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt6gb\" (UniqueName: \"kubernetes.io/projected/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-kube-api-access-wt6gb\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.894321 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.894332 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.894342 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.894353 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.895188 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c36c41-b1e7-4d09-830c-d879d6b9a982","Type":"ContainerStarted","Data":"026ef1441aa8e1fc645c5f172ab78e733eb721075991ef3a98cdf2b37bc5a14e"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.902300 4764 generic.go:334] "Generic (PLEG): container finished" podID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerID="44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc" exitCode=0 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.902349 4764 generic.go:334] "Generic (PLEG): container finished" podID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerID="4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc" exitCode=143 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.902352 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.902358 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" event={"ID":"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6","Type":"ContainerDied","Data":"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.902457 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" event={"ID":"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6","Type":"ContainerDied","Data":"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.902473 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dffd4f5f4-gww9r" event={"ID":"4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6","Type":"ContainerDied","Data":"6e3b9a44c86e75e2ba40203e00c14ba2c5a9fb10fb0ecd07949b70e708873544"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.902494 4764 scope.go:117] "RemoveContainer" containerID="44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.905366 4764 generic.go:334] "Generic (PLEG): container finished" podID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerID="912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877" exitCode=143 Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.906861 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b4cd6565-zcddr" event={"ID":"7d3c6d80-40f9-4109-a486-af6c7f42cbf6","Type":"ContainerDied","Data":"912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877"} Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.933900 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" 
podStartSLOduration=4.933877675 podStartE2EDuration="4.933877675s" podCreationTimestamp="2026-01-27 07:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:33.91881279 +0000 UTC m=+1146.514435316" watchObservedRunningTime="2026-01-27 07:35:33.933877675 +0000 UTC m=+1146.529500201" Jan 27 07:35:33 crc kubenswrapper[4764]: I0127 07:35:33.981487 4764 scope.go:117] "RemoveContainer" containerID="4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:33.998587 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-dffd4f5f4-gww9r"] Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.023049 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-dffd4f5f4-gww9r"] Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.079980 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.088734 4764 scope.go:117] "RemoveContainer" containerID="44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc" Jan 27 07:35:34 crc kubenswrapper[4764]: E0127 07:35:34.089207 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc\": container with ID starting with 44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc not found: ID does not exist" containerID="44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.089240 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc"} err="failed to get container status \"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc\": rpc error: code = NotFound desc = could not find container \"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc\": container with ID starting with 44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc not found: ID does not exist" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.089262 4764 scope.go:117] "RemoveContainer" containerID="4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc" Jan 27 07:35:34 crc kubenswrapper[4764]: E0127 07:35:34.089516 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc\": container with ID starting with 4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc not found: ID does not exist" containerID="4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 
07:35:34.089538 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc"} err="failed to get container status \"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc\": rpc error: code = NotFound desc = could not find container \"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc\": container with ID starting with 4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc not found: ID does not exist" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.089550 4764 scope.go:117] "RemoveContainer" containerID="44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.089710 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc"} err="failed to get container status \"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc\": rpc error: code = NotFound desc = could not find container \"44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc\": container with ID starting with 44140749bb43c92159704de075358c8a22eeeeafde201ddef24b9bc55b3052dc not found: ID does not exist" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.089729 4764 scope.go:117] "RemoveContainer" containerID="4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.089865 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc"} err="failed to get container status \"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc\": rpc error: code = NotFound desc = could not find container \"4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc\": container with ID starting with 
4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc not found: ID does not exist" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.097514 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l27lz\" (UniqueName: \"kubernetes.io/projected/6b44fa92-de90-4956-8425-e184375fddc1-kube-api-access-l27lz\") pod \"6b44fa92-de90-4956-8425-e184375fddc1\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.097557 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-secret-key\") pod \"6b44fa92-de90-4956-8425-e184375fddc1\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.097583 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-combined-ca-bundle\") pod \"6b44fa92-de90-4956-8425-e184375fddc1\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.097639 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-scripts\") pod \"6b44fa92-de90-4956-8425-e184375fddc1\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.097663 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b44fa92-de90-4956-8425-e184375fddc1-logs\") pod \"6b44fa92-de90-4956-8425-e184375fddc1\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.097691 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-config-data\") pod \"6b44fa92-de90-4956-8425-e184375fddc1\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.097779 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-tls-certs\") pod \"6b44fa92-de90-4956-8425-e184375fddc1\" (UID: \"6b44fa92-de90-4956-8425-e184375fddc1\") " Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.099283 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b44fa92-de90-4956-8425-e184375fddc1-logs" (OuterVolumeSpecName: "logs") pod "6b44fa92-de90-4956-8425-e184375fddc1" (UID: "6b44fa92-de90-4956-8425-e184375fddc1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.116741 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b44fa92-de90-4956-8425-e184375fddc1-kube-api-access-l27lz" (OuterVolumeSpecName: "kube-api-access-l27lz") pod "6b44fa92-de90-4956-8425-e184375fddc1" (UID: "6b44fa92-de90-4956-8425-e184375fddc1"). InnerVolumeSpecName "kube-api-access-l27lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.120302 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6b44fa92-de90-4956-8425-e184375fddc1" (UID: "6b44fa92-de90-4956-8425-e184375fddc1"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.145909 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-config-data" (OuterVolumeSpecName: "config-data") pod "6b44fa92-de90-4956-8425-e184375fddc1" (UID: "6b44fa92-de90-4956-8425-e184375fddc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.151668 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b44fa92-de90-4956-8425-e184375fddc1" (UID: "6b44fa92-de90-4956-8425-e184375fddc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.192178 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-scripts" (OuterVolumeSpecName: "scripts") pod "6b44fa92-de90-4956-8425-e184375fddc1" (UID: "6b44fa92-de90-4956-8425-e184375fddc1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.199923 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l27lz\" (UniqueName: \"kubernetes.io/projected/6b44fa92-de90-4956-8425-e184375fddc1-kube-api-access-l27lz\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.199950 4764 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.199959 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.199970 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.199978 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b44fa92-de90-4956-8425-e184375fddc1-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.199986 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b44fa92-de90-4956-8425-e184375fddc1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.241465 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6b44fa92-de90-4956-8425-e184375fddc1" (UID: "6b44fa92-de90-4956-8425-e184375fddc1"). 
InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.301590 4764 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b44fa92-de90-4956-8425-e184375fddc1-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.452881 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" path="/var/lib/kubelet/pods/4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6/volumes" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.831657 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.831953 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-log" containerID="cri-o://5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223" gracePeriod=30 Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.832134 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-httpd" containerID="cri-o://d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c" gracePeriod=30 Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.920900 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68d75bdb9d-z5cr4" event={"ID":"6b44fa92-de90-4956-8425-e184375fddc1","Type":"ContainerDied","Data":"59d79f33e328abf3f1518dd953232239f4beae55ab824c16d478d449226521f1"} Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.920949 4764 scope.go:117] "RemoveContainer" containerID="98a65d9ef8ad339cc7701d7b492d80a604c5a2e55551f85aca3daebb7b8166c4" Jan 27 
07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.920946 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68d75bdb9d-z5cr4" Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.948217 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68d75bdb9d-z5cr4"] Jan 27 07:35:34 crc kubenswrapper[4764]: I0127 07:35:34.962003 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68d75bdb9d-z5cr4"] Jan 27 07:35:35 crc kubenswrapper[4764]: I0127 07:35:35.130071 4764 scope.go:117] "RemoveContainer" containerID="8b8b9b70418dd3efcb6afb423d022b9e64927b83f9a4d510ee0fb8ed56b77411" Jan 27 07:35:35 crc kubenswrapper[4764]: I0127 07:35:35.932021 4764 generic.go:334] "Generic (PLEG): container finished" podID="1b8645de-4272-4382-bb78-4ec88cdba698" containerID="5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223" exitCode=143 Jan 27 07:35:35 crc kubenswrapper[4764]: I0127 07:35:35.932102 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b8645de-4272-4382-bb78-4ec88cdba698","Type":"ContainerDied","Data":"5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223"} Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.117062 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68dd46b99f-78lf2"] Jan 27 07:35:36 crc kubenswrapper[4764]: E0127 07:35:36.117649 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon-log" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.117723 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon-log" Jan 27 07:35:36 crc kubenswrapper[4764]: E0127 07:35:36.117803 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" Jan 27 
07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.117860 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" Jan 27 07:35:36 crc kubenswrapper[4764]: E0127 07:35:36.117917 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.117972 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener" Jan 27 07:35:36 crc kubenswrapper[4764]: E0127 07:35:36.118047 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener-log" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.118108 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener-log" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.118324 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.118395 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.118476 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b44fa92-de90-4956-8425-e184375fddc1" containerName="horizon-log" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.118555 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1e2f7e-5a81-4fc3-9de9-a1db7aed31e6" containerName="barbican-keystone-listener-log" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.119540 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.128458 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68dd46b99f-78lf2"] Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.144752 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-config\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.144808 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-ovndb-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.144869 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gw8t\" (UniqueName: \"kubernetes.io/projected/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-kube-api-access-5gw8t\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.144900 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-public-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.144923 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-internal-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.144967 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-combined-ca-bundle\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.144986 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-httpd-config\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.246914 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gw8t\" (UniqueName: \"kubernetes.io/projected/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-kube-api-access-5gw8t\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.246976 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-public-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.247019 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-internal-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.247085 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-combined-ca-bundle\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.247110 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-httpd-config\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.247185 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-config\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.247210 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-ovndb-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.253327 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-ovndb-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: 
\"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.256016 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-combined-ca-bundle\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.256716 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-internal-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.256957 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-httpd-config\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.257702 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-config\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.269093 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gw8t\" (UniqueName: \"kubernetes.io/projected/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-kube-api-access-5gw8t\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 
07:35:36.272481 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4-public-tls-certs\") pod \"neutron-68dd46b99f-78lf2\" (UID: \"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4\") " pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.434942 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:36 crc kubenswrapper[4764]: I0127 07:35:36.449497 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b44fa92-de90-4956-8425-e184375fddc1" path="/var/lib/kubelet/pods/6b44fa92-de90-4956-8425-e184375fddc1/volumes" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.021331 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68dd46b99f-78lf2"] Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.901049 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.948620 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dd46b99f-78lf2" event={"ID":"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4","Type":"ContainerStarted","Data":"26d5bd494ad2328c5367adbbc646f5c45878bce3cf86b7061a6667363c584682"} Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.948662 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dd46b99f-78lf2" event={"ID":"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4","Type":"ContainerStarted","Data":"659f7804d5a227de057d8d23f7226734d5995c6a8bbc41debaf09dbbe315ce32"} Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.948672 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dd46b99f-78lf2" event={"ID":"5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4","Type":"ContainerStarted","Data":"6445db86f88302ae41039b1f5dea70f11d908bb99ea821a13ea2a9462d114f2c"} Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.949694 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.952892 4764 generic.go:334] "Generic (PLEG): container finished" podID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerID="b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4" exitCode=0 Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.952932 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b4cd6565-zcddr" event={"ID":"7d3c6d80-40f9-4109-a486-af6c7f42cbf6","Type":"ContainerDied","Data":"b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4"} Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.952957 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57b4cd6565-zcddr" 
event={"ID":"7d3c6d80-40f9-4109-a486-af6c7f42cbf6","Type":"ContainerDied","Data":"357e1717b823573116a5f518b4a8322c5be22c5b8f4ae530f18bb76e1784f86f"} Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.952978 4764 scope.go:117] "RemoveContainer" containerID="b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.953107 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-57b4cd6565-zcddr" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.969992 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68dd46b99f-78lf2" podStartSLOduration=1.969972013 podStartE2EDuration="1.969972013s" podCreationTimestamp="2026-01-27 07:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:37.967050247 +0000 UTC m=+1150.562672773" watchObservedRunningTime="2026-01-27 07:35:37.969972013 +0000 UTC m=+1150.565594539" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.979293 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64n55\" (UniqueName: \"kubernetes.io/projected/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-kube-api-access-64n55\") pod \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.979368 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-combined-ca-bundle\") pod \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.979393 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data-custom\") pod \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.979496 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-logs\") pod \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.979565 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data\") pod \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\" (UID: \"7d3c6d80-40f9-4109-a486-af6c7f42cbf6\") " Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.981795 4764 scope.go:117] "RemoveContainer" containerID="912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.982200 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-logs" (OuterVolumeSpecName: "logs") pod "7d3c6d80-40f9-4109-a486-af6c7f42cbf6" (UID: "7d3c6d80-40f9-4109-a486-af6c7f42cbf6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.985450 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-kube-api-access-64n55" (OuterVolumeSpecName: "kube-api-access-64n55") pod "7d3c6d80-40f9-4109-a486-af6c7f42cbf6" (UID: "7d3c6d80-40f9-4109-a486-af6c7f42cbf6"). InnerVolumeSpecName "kube-api-access-64n55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:37 crc kubenswrapper[4764]: I0127 07:35:37.991482 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7d3c6d80-40f9-4109-a486-af6c7f42cbf6" (UID: "7d3c6d80-40f9-4109-a486-af6c7f42cbf6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.006662 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": read tcp 10.217.0.2:57430->10.217.0.157:9292: read: connection reset by peer" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.007005 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": read tcp 10.217.0.2:57438->10.217.0.157:9292: read: connection reset by peer" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.024136 4764 scope.go:117] "RemoveContainer" containerID="b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4" Jan 27 07:35:38 crc kubenswrapper[4764]: E0127 07:35:38.026365 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4\": container with ID starting with b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4 not found: ID does not exist" containerID="b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 
07:35:38.026390 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4"} err="failed to get container status \"b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4\": rpc error: code = NotFound desc = could not find container \"b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4\": container with ID starting with b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4 not found: ID does not exist" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.026408 4764 scope.go:117] "RemoveContainer" containerID="912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877" Jan 27 07:35:38 crc kubenswrapper[4764]: E0127 07:35:38.030997 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877\": container with ID starting with 912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877 not found: ID does not exist" containerID="912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.031044 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877"} err="failed to get container status \"912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877\": rpc error: code = NotFound desc = could not find container \"912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877\": container with ID starting with 912275e85493c401e161b8a8463cb648e115de2095ab3731f465dd6448a16877 not found: ID does not exist" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.047592 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d3c6d80-40f9-4109-a486-af6c7f42cbf6" (UID: "7d3c6d80-40f9-4109-a486-af6c7f42cbf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.068190 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data" (OuterVolumeSpecName: "config-data") pod "7d3c6d80-40f9-4109-a486-af6c7f42cbf6" (UID: "7d3c6d80-40f9-4109-a486-af6c7f42cbf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.081843 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.082138 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.082148 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.082160 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64n55\" (UniqueName: \"kubernetes.io/projected/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-kube-api-access-64n55\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.082171 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d3c6d80-40f9-4109-a486-af6c7f42cbf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.340643 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-57b4cd6565-zcddr"] Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.362783 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-57b4cd6565-zcddr"] Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.518698 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" path="/var/lib/kubelet/pods/7d3c6d80-40f9-4109-a486-af6c7f42cbf6/volumes" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.765537 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925089 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925239 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-combined-ca-bundle\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925333 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bsml\" (UniqueName: \"kubernetes.io/projected/1b8645de-4272-4382-bb78-4ec88cdba698-kube-api-access-5bsml\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925369 
4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-logs\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925404 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-httpd-run\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925427 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-config-data\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925470 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-scripts\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.925535 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-public-tls-certs\") pod \"1b8645de-4272-4382-bb78-4ec88cdba698\" (UID: \"1b8645de-4272-4382-bb78-4ec88cdba698\") " Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.927027 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-logs" (OuterVolumeSpecName: "logs") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.933080 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.956422 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.960590 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-scripts" (OuterVolumeSpecName: "scripts") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:38 crc kubenswrapper[4764]: I0127 07:35:38.977495 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8645de-4272-4382-bb78-4ec88cdba698-kube-api-access-5bsml" (OuterVolumeSpecName: "kube-api-access-5bsml") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). InnerVolumeSpecName "kube-api-access-5bsml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.000073 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.023736 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.028323 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.038305 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.038532 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bsml\" (UniqueName: \"kubernetes.io/projected/1b8645de-4272-4382-bb78-4ec88cdba698-kube-api-access-5bsml\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.038595 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-logs\") on node \"crc\" DevicePath 
\"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.038650 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b8645de-4272-4382-bb78-4ec88cdba698-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.038730 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.038788 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.040346 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-config-data" (OuterVolumeSpecName: "config-data") pod "1b8645de-4272-4382-bb78-4ec88cdba698" (UID: "1b8645de-4272-4382-bb78-4ec88cdba698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.086547 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.108661 4764 generic.go:334] "Generic (PLEG): container finished" podID="1b8645de-4272-4382-bb78-4ec88cdba698" containerID="d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c" exitCode=0 Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.108931 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b8645de-4272-4382-bb78-4ec88cdba698","Type":"ContainerDied","Data":"d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c"} Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.109002 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1b8645de-4272-4382-bb78-4ec88cdba698","Type":"ContainerDied","Data":"23b01e464ae5a984a2ca17cc3178cb8c42b8996461a435b11a054460dcd673e7"} Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.109027 4764 scope.go:117] "RemoveContainer" containerID="d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.109562 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.127693 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.139972 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvs98\" (UniqueName: \"kubernetes.io/projected/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-kube-api-access-vvs98\") pod \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.140049 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-config\") pod \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.140139 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-combined-ca-bundle\") pod \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.140160 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-httpd-config\") pod \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.140298 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-ovndb-tls-certs\") pod 
\"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\" (UID: \"0d9c8e92-873f-4623-b6ab-4bc09eacaefd\") " Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.140652 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.140669 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8645de-4272-4382-bb78-4ec88cdba698-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.151334 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-kube-api-access-vvs98" (OuterVolumeSpecName: "kube-api-access-vvs98") pod "0d9c8e92-873f-4623-b6ab-4bc09eacaefd" (UID: "0d9c8e92-873f-4623-b6ab-4bc09eacaefd"). InnerVolumeSpecName "kube-api-access-vvs98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.151774 4764 generic.go:334] "Generic (PLEG): container finished" podID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerID="6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4" exitCode=0 Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.155069 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86db767f96-qw88w" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.159275 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86db767f96-qw88w" event={"ID":"0d9c8e92-873f-4623-b6ab-4bc09eacaefd","Type":"ContainerDied","Data":"6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4"} Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.171524 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86db767f96-qw88w" event={"ID":"0d9c8e92-873f-4623-b6ab-4bc09eacaefd","Type":"ContainerDied","Data":"044a16bf61578c312ee93c2d450782feac072c87d9f968715be8511ca3d87ae4"} Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.193643 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0d9c8e92-873f-4623-b6ab-4bc09eacaefd" (UID: "0d9c8e92-873f-4623-b6ab-4bc09eacaefd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.201254 4764 scope.go:117] "RemoveContainer" containerID="5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.230161 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.235460 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-config" (OuterVolumeSpecName: "config") pod "0d9c8e92-873f-4623-b6ab-4bc09eacaefd" (UID: "0d9c8e92-873f-4623-b6ab-4bc09eacaefd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.247945 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.248721 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvs98\" (UniqueName: \"kubernetes.io/projected/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-kube-api-access-vvs98\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.248752 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.248782 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.260971 4764 scope.go:117] "RemoveContainer" containerID="d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.261952 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c\": container with ID starting with d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c not found: ID does not exist" containerID="d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.262084 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c"} err="failed to get container status \"d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c\": rpc 
error: code = NotFound desc = could not find container \"d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c\": container with ID starting with d39bed02f5b0c5603cd773be7a0c5d54a4b28948fd4945ba788db99468b4b43c not found: ID does not exist" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.262116 4764 scope.go:117] "RemoveContainer" containerID="5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.265137 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223\": container with ID starting with 5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223 not found: ID does not exist" containerID="5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.265171 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223"} err="failed to get container status \"5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223\": rpc error: code = NotFound desc = could not find container \"5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223\": container with ID starting with 5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223 not found: ID does not exist" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.265225 4764 scope.go:117] "RemoveContainer" containerID="1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.278703 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.279176 4764 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-httpd" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.279273 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-httpd" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.279331 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker-log" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.279385 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker-log" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.279460 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.279512 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.279571 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-api" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.279628 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-api" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.279679 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-log" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.279733 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-log" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.279825 4764 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-httpd" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.279991 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-httpd" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.280215 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-api" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.280274 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-httpd" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.280333 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" containerName="neutron-httpd" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.280391 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.280500 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3c6d80-40f9-4109-a486-af6c7f42cbf6" containerName="barbican-worker-log" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.280579 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" containerName="glance-log" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.281788 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.291349 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.292934 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.309581 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.352987 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.353179 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.353248 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.353287 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-logs\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.353382 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdpm\" (UniqueName: \"kubernetes.io/projected/aeb9ae7e-2477-49e8-8690-d7fe580667fe-kube-api-access-ljdpm\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.353411 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.354049 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.354301 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.360249 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d9c8e92-873f-4623-b6ab-4bc09eacaefd" (UID: "0d9c8e92-873f-4623-b6ab-4bc09eacaefd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.363640 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0d9c8e92-873f-4623-b6ab-4bc09eacaefd" (UID: "0d9c8e92-873f-4623-b6ab-4bc09eacaefd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.438864 4764 scope.go:117] "RemoveContainer" containerID="6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.455968 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljdpm\" (UniqueName: \"kubernetes.io/projected/aeb9ae7e-2477-49e8-8690-d7fe580667fe-kube-api-access-ljdpm\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456018 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456069 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456157 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456207 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456298 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456332 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456364 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-logs\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " 
pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456458 4764 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456475 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d9c8e92-873f-4623-b6ab-4bc09eacaefd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.456924 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-logs\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.457143 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.458038 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.464068 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.475817 4764 scope.go:117] "RemoveContainer" containerID="1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.476673 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e\": container with ID starting with 1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e not found: ID does not exist" containerID="1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.476716 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e"} err="failed to get container status \"1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e\": rpc error: code = NotFound desc = could not find container \"1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e\": container with ID starting with 1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e not found: ID does not exist" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.476747 4764 scope.go:117] "RemoveContainer" containerID="6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.488052 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.489180 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: E0127 07:35:39.489288 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4\": container with ID starting with 6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4 not found: ID does not exist" containerID="6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.489378 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4"} err="failed to get container status \"6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4\": rpc error: code = NotFound desc = could not find container \"6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4\": container with ID starting with 6fbf411ba6ecc8ac588d032364dbbf9463db4dae9de4604891a057954c0e48d4 not found: ID does not exist" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.499585 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.503118 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljdpm\" (UniqueName: \"kubernetes.io/projected/aeb9ae7e-2477-49e8-8690-d7fe580667fe-kube-api-access-ljdpm\") pod 
\"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.525167 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " pod="openstack/glance-default-external-api-0" Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.574688 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86db767f96-qw88w"] Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.585613 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86db767f96-qw88w"] Jan 27 07:35:39 crc kubenswrapper[4764]: I0127 07:35:39.737322 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:35:40 crc kubenswrapper[4764]: E0127 07:35:40.118224 4764 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f1ac772058f20f595683ec7489232f4d405b650fcefa76d8a882d84689b5c581/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f1ac772058f20f595683ec7489232f4d405b650fcefa76d8a882d84689b5c581/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-external-api-0_1b8645de-4272-4382-bb78-4ec88cdba698/glance-log/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-external-api-0_1b8645de-4272-4382-bb78-4ec88cdba698/glance-log/0.log: no such file or directory Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.319676 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.384033 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.384111 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.428632 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.458958 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9c8e92-873f-4623-b6ab-4bc09eacaefd" path="/var/lib/kubelet/pods/0d9c8e92-873f-4623-b6ab-4bc09eacaefd/volumes" Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.459609 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8645de-4272-4382-bb78-4ec88cdba698" path="/var/lib/kubelet/pods/1b8645de-4272-4382-bb78-4ec88cdba698/volumes" Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.460371 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:40 crc kubenswrapper[4764]: I0127 07:35:40.477512 4764 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda39ef80b-f486-467e-81bd-38eec01902b7"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda39ef80b-f486-467e-81bd-38eec01902b7] : Timed out while waiting for systemd to remove kubepods-besteffort-poda39ef80b_f486_467e_81bd_38eec01902b7.slice" Jan 27 07:35:40 crc kubenswrapper[4764]: E0127 07:35:40.477578 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poda39ef80b-f486-467e-81bd-38eec01902b7] : unable to destroy cgroup paths for cgroup [kubepods besteffort poda39ef80b-f486-467e-81bd-38eec01902b7] : Timed out while waiting for systemd to remove kubepods-besteffort-poda39ef80b_f486_467e_81bd_38eec01902b7.slice" 
pod="openstack/cinder-scheduler-0" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" Jan 27 07:35:40 crc kubenswrapper[4764]: E0127 07:35:40.867099 4764 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7b31062f0782509a8ee0ff26fa54866ae30e0980614afd931e046038b2c3a410/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7b31062f0782509a8ee0ff26fa54866ae30e0980614afd931e046038b2c3a410/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-external-api-0_1b8645de-4272-4382-bb78-4ec88cdba698/glance-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-external-api-0_1b8645de-4272-4382-bb78-4ec88cdba698/glance-httpd/0.log: no such file or directory Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.170529 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aeb9ae7e-2477-49e8-8690-d7fe580667fe","Type":"ContainerStarted","Data":"98b15996955f0a4b084d8985ca3abf37d536aaa4b65951b48d6389d80e0a2136"} Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.170801 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aeb9ae7e-2477-49e8-8690-d7fe580667fe","Type":"ContainerStarted","Data":"57a68ad665ea330f898508756f994b31f45a33bacbcd599532c19305aaedd487"} Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.170574 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.170960 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.170977 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.214479 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.247516 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.267190 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.269092 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.271890 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.273832 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.295543 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.295618 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.295670 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aecc4746-6e8e-46a5-b55f-04deb52a10ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.295700 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/aecc4746-6e8e-46a5-b55f-04deb52a10ff-kube-api-access-6jxlb\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.295796 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.295824 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: E0127 07:35:41.344279 4764 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fbd966ab2b6aef3cd02af13e7ae2b4f398ac5db4b987f3b923be422c1524fa36/diff" 
to get inode usage: stat /var/lib/containers/storage/overlay/fbd966ab2b6aef3cd02af13e7ae2b4f398ac5db4b987f3b923be422c1524fa36/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-internal-api-0_4be230ef-8bfc-453c-9653-dcae5c70bee7/glance-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-internal-api-0_4be230ef-8bfc-453c-9653-dcae5c70bee7/glance-httpd/0.log: no such file or directory Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.397229 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.397300 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aecc4746-6e8e-46a5-b55f-04deb52a10ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.397331 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/aecc4746-6e8e-46a5-b55f-04deb52a10ff-kube-api-access-6jxlb\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.397468 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.397490 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.397539 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.398936 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aecc4746-6e8e-46a5-b55f-04deb52a10ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.404387 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.405904 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.407208 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.419125 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.427266 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/aecc4746-6e8e-46a5-b55f-04deb52a10ff-kube-api-access-6jxlb\") pod \"cinder-scheduler-0\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " pod="openstack/cinder-scheduler-0" Jan 27 07:35:41 crc kubenswrapper[4764]: I0127 07:35:41.647649 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:35:42 crc kubenswrapper[4764]: I0127 07:35:42.177772 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:35:42 crc kubenswrapper[4764]: I0127 07:35:42.180418 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aeb9ae7e-2477-49e8-8690-d7fe580667fe","Type":"ContainerStarted","Data":"60859cb06a8049a0cbb7c321949a0e33a989903faa5abd9ba419f94362da2e30"} Jan 27 07:35:42 crc kubenswrapper[4764]: I0127 07:35:42.216361 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.216326394 podStartE2EDuration="3.216326394s" podCreationTimestamp="2026-01-27 07:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:42.199783011 +0000 UTC m=+1154.795405547" 
watchObservedRunningTime="2026-01-27 07:35:42.216326394 +0000 UTC m=+1154.811948920" Jan 27 07:35:42 crc kubenswrapper[4764]: I0127 07:35:42.454903 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a39ef80b-f486-467e-81bd-38eec01902b7" path="/var/lib/kubelet/pods/a39ef80b-f486-467e-81bd-38eec01902b7/volumes" Jan 27 07:35:42 crc kubenswrapper[4764]: E0127 07:35:42.584587 4764 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be230ef_8bfc_453c_9653_dcae5c70bee7.slice/crio-6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119: Error finding container 6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119: Status 404 returned error can't find the container with id 6ebff075a2ae097f39275abe1f7300e00fe2f9f2583e83626f4fdcc0d4c72119 Jan 27 07:35:42 crc kubenswrapper[4764]: W0127 07:35:42.588379 4764 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0ff04b_e918_4ddc_8004_715fd011adb0.slice/crio-conmon-2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0ff04b_e918_4ddc_8004_715fd011adb0.slice/crio-conmon-2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6.scope: no such file or directory Jan 27 07:35:42 crc kubenswrapper[4764]: W0127 07:35:42.597707 4764 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0ff04b_e918_4ddc_8004_715fd011adb0.slice/crio-2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0ff04b_e918_4ddc_8004_715fd011adb0.slice/crio-2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6.scope: no such file or directory Jan 27 07:35:42 crc kubenswrapper[4764]: E0127 07:35:42.835411 4764 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0ff04b_e918_4ddc_8004_715fd011adb0.slice/crio-conmon-d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9c8e92_873f_4623_b6ab_4bc09eacaefd.slice/crio-conmon-1c97ce4d57c1183d8a1810a0d670b685aaad455a409ad730757c93e3ffa8bb9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1e2f7e_5a81_4fc3_9de9_a1db7aed31e6.slice/crio-6e3b9a44c86e75e2ba40203e00c14ba2c5a9fb10fb0ecd07949b70e708873544\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b44fa92_de90_4956_8425_e184375fddc1.slice/crio-conmon-8b8b9b70418dd3efcb6afb423d022b9e64927b83f9a4d510ee0fb8ed56b77411.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9c8e92_873f_4623_b6ab_4bc09eacaefd.slice/crio-044a16bf61578c312ee93c2d450782feac072c87d9f968715be8511ca3d87ae4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8645de_4272_4382_bb78_4ec88cdba698.slice/crio-5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9c8e92_873f_4623_b6ab_4bc09eacaefd.slice\": RecentStats: unable to find 
data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1e2f7e_5a81_4fc3_9de9_a1db7aed31e6.slice/crio-conmon-4f695fa4d81f7a87b213825874cbb882525df529866c614dc7cf11ac2e5677bc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3c6d80_40f9_4109_a486_af6c7f42cbf6.slice/crio-conmon-b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8645de_4272_4382_bb78_4ec88cdba698.slice/crio-conmon-5f9add8483dffef27b8f020388b29f2f9cf8ac6180320242a45a51648e4fd223.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0ff04b_e918_4ddc_8004_715fd011adb0.slice/crio-d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3c6d80_40f9_4109_a486_af6c7f42cbf6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3c6d80_40f9_4109_a486_af6c7f42cbf6.slice/crio-b455a02f327f13dff77b7d6b6c1350e685eed82eec19cae0057783a9d7b404d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8645de_4272_4382_bb78_4ec88cdba698.slice/crio-23b01e464ae5a984a2ca17cc3178cb8c42b8996461a435b11a054460dcd673e7\": RecentStats: unable to find data in memory cache]" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.023994 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136114 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-run-httpd\") pod \"5e0ff04b-e918-4ddc-8004-715fd011adb0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136445 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-combined-ca-bundle\") pod \"5e0ff04b-e918-4ddc-8004-715fd011adb0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136470 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-sg-core-conf-yaml\") pod \"5e0ff04b-e918-4ddc-8004-715fd011adb0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136587 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-log-httpd\") pod \"5e0ff04b-e918-4ddc-8004-715fd011adb0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136649 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-scripts\") pod \"5e0ff04b-e918-4ddc-8004-715fd011adb0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136722 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th45r\" (UniqueName: 
\"kubernetes.io/projected/5e0ff04b-e918-4ddc-8004-715fd011adb0-kube-api-access-th45r\") pod \"5e0ff04b-e918-4ddc-8004-715fd011adb0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136762 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-config-data\") pod \"5e0ff04b-e918-4ddc-8004-715fd011adb0\" (UID: \"5e0ff04b-e918-4ddc-8004-715fd011adb0\") " Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.136951 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5e0ff04b-e918-4ddc-8004-715fd011adb0" (UID: "5e0ff04b-e918-4ddc-8004-715fd011adb0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.137478 4764 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.137561 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5e0ff04b-e918-4ddc-8004-715fd011adb0" (UID: "5e0ff04b-e918-4ddc-8004-715fd011adb0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.143849 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-scripts" (OuterVolumeSpecName: "scripts") pod "5e0ff04b-e918-4ddc-8004-715fd011adb0" (UID: "5e0ff04b-e918-4ddc-8004-715fd011adb0"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.144126 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0ff04b-e918-4ddc-8004-715fd011adb0-kube-api-access-th45r" (OuterVolumeSpecName: "kube-api-access-th45r") pod "5e0ff04b-e918-4ddc-8004-715fd011adb0" (UID: "5e0ff04b-e918-4ddc-8004-715fd011adb0"). InnerVolumeSpecName "kube-api-access-th45r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.169564 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5e0ff04b-e918-4ddc-8004-715fd011adb0" (UID: "5e0ff04b-e918-4ddc-8004-715fd011adb0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.219599 4764 generic.go:334] "Generic (PLEG): container finished" podID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerID="d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f" exitCode=0 Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.219683 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerDied","Data":"d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f"} Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.219720 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e0ff04b-e918-4ddc-8004-715fd011adb0","Type":"ContainerDied","Data":"2868fb12a5fb87580e07aa2ae400d976e82eab92d9db5cd5714e7fe4d7dd117d"} Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.219743 4764 scope.go:117] "RemoveContainer" 
containerID="2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.219913 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.231062 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e0ff04b-e918-4ddc-8004-715fd011adb0" (UID: "5e0ff04b-e918-4ddc-8004-715fd011adb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.237195 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aecc4746-6e8e-46a5-b55f-04deb52a10ff","Type":"ContainerStarted","Data":"70dd79d19231a2c2805407713e3787c15f1480c5049897dfb3b4b1ac100f398e"} Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.237239 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aecc4746-6e8e-46a5-b55f-04deb52a10ff","Type":"ContainerStarted","Data":"07ed2908e8a51a25f1230e03b2dc1c8606fe330c4ba6d62a6c0cf255d4098d0a"} Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.239486 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.239520 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th45r\" (UniqueName: \"kubernetes.io/projected/5e0ff04b-e918-4ddc-8004-715fd011adb0-kube-api-access-th45r\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.239532 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.239542 4764 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.239552 4764 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e0ff04b-e918-4ddc-8004-715fd011adb0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.257584 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-config-data" (OuterVolumeSpecName: "config-data") pod "5e0ff04b-e918-4ddc-8004-715fd011adb0" (UID: "5e0ff04b-e918-4ddc-8004-715fd011adb0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.265421 4764 scope.go:117] "RemoveContainer" containerID="eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.325928 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bdbf9956d-hxvm6" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.341625 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0ff04b-e918-4ddc-8004-715fd011adb0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.371925 4764 scope.go:117] "RemoveContainer" containerID="cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.414258 4764 scope.go:117] "RemoveContainer" containerID="d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.443848 4764 scope.go:117] "RemoveContainer" containerID="2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6" Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.444525 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6\": container with ID starting with 2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6 not found: ID does not exist" containerID="2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.444573 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6"} err="failed to get container status 
\"2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6\": rpc error: code = NotFound desc = could not find container \"2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6\": container with ID starting with 2f09664f54d0844c313175e5133f2f0453c8f0bdb3fed412af7aa4f5832d29a6 not found: ID does not exist" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.444600 4764 scope.go:117] "RemoveContainer" containerID="eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17" Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.445388 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17\": container with ID starting with eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17 not found: ID does not exist" containerID="eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.445426 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17"} err="failed to get container status \"eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17\": rpc error: code = NotFound desc = could not find container \"eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17\": container with ID starting with eae2712b855ed74a8cee674d41194127a846e7d62b5addf79487578dda4d7a17 not found: ID does not exist" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.445472 4764 scope.go:117] "RemoveContainer" containerID="cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806" Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.445882 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806\": container with ID starting with cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806 not found: ID does not exist" containerID="cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.445907 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806"} err="failed to get container status \"cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806\": rpc error: code = NotFound desc = could not find container \"cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806\": container with ID starting with cc6e3dfbfa3c48d690bc43cbf38eb8ea91d67b412d610a94a6677f386722b806 not found: ID does not exist" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.445921 4764 scope.go:117] "RemoveContainer" containerID="d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f" Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.446184 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f\": container with ID starting with d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f not found: ID does not exist" containerID="d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.446208 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f"} err="failed to get container status \"d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f\": rpc error: code = NotFound desc = could not find container \"d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f\": container with ID 
starting with d2668e4e2d20925bf213bf52509e67f401054133dba197a0495640421cd3357f not found: ID does not exist" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.464134 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bdbf9956d-hxvm6" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.573759 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-846df5dc9d-clqgc"] Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.573978 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-846df5dc9d-clqgc" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api-log" containerID="cri-o://ab2ec021a42ef7058513fbd77a0e52f702b82f4f545c319a39f029bb37028471" gracePeriod=30 Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.574475 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-846df5dc9d-clqgc" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api" containerID="cri-o://bf330d9035fae199f54da317068caaf0e877021e58a08135ffd390f53a3a9bcd" gracePeriod=30 Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.645527 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.668224 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.709242 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.709863 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-notification-agent" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.709880 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-notification-agent" Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.709897 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="proxy-httpd" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.709903 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="proxy-httpd" Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.709918 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-central-agent" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.709925 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-central-agent" Jan 27 07:35:43 crc kubenswrapper[4764]: E0127 07:35:43.709933 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="sg-core" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.709939 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="sg-core" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.710087 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-notification-agent" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.710106 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="ceilometer-central-agent" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.710120 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="proxy-httpd" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.710129 4764 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" containerName="sg-core" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.711579 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.716935 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.717090 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.748524 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.859509 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-log-httpd\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.859576 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-run-httpd\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.859647 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.859678 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.859706 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-scripts\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.859782 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-config-data\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.859838 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxns\" (UniqueName: \"kubernetes.io/projected/56d9b1bd-fc6c-422d-b89c-c29172c721b8-kube-api-access-6bxns\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.961515 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-config-data\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.961612 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bxns\" (UniqueName: 
\"kubernetes.io/projected/56d9b1bd-fc6c-422d-b89c-c29172c721b8-kube-api-access-6bxns\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.961655 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-log-httpd\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.961687 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-run-httpd\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.961747 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.961772 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.961801 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-scripts\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 
07:35:43.964390 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-log-httpd\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.965904 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-run-httpd\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.970042 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.970314 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.970775 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-scripts\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.972306 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-config-data\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " 
pod="openstack/ceilometer-0" Jan 27 07:35:43 crc kubenswrapper[4764]: I0127 07:35:43.990241 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bxns\" (UniqueName: \"kubernetes.io/projected/56d9b1bd-fc6c-422d-b89c-c29172c721b8-kube-api-access-6bxns\") pod \"ceilometer-0\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " pod="openstack/ceilometer-0" Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.099747 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.114491 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.114618 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.283358 4764 generic.go:334] "Generic (PLEG): container finished" podID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerID="ab2ec021a42ef7058513fbd77a0e52f702b82f4f545c319a39f029bb37028471" exitCode=143 Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.283603 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-846df5dc9d-clqgc" event={"ID":"d5476bb4-c464-49cd-acb2-1cae6acc8bea","Type":"ContainerDied","Data":"ab2ec021a42ef7058513fbd77a0e52f702b82f4f545c319a39f029bb37028471"} Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.321365 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aecc4746-6e8e-46a5-b55f-04deb52a10ff","Type":"ContainerStarted","Data":"e05443e395b1729ee946218afc1586e2e6404fa803cbcc4bb93a53c1162aec11"} Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.340420 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.340401783 podStartE2EDuration="3.340401783s" 
podCreationTimestamp="2026-01-27 07:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:44.33952227 +0000 UTC m=+1156.935144796" watchObservedRunningTime="2026-01-27 07:35:44.340401783 +0000 UTC m=+1156.936024309" Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.466636 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0ff04b-e918-4ddc-8004-715fd011adb0" path="/var/lib/kubelet/pods/5e0ff04b-e918-4ddc-8004-715fd011adb0/volumes" Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.617821 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:44 crc kubenswrapper[4764]: W0127 07:35:44.623736 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56d9b1bd_fc6c_422d_b89c_c29172c721b8.slice/crio-452aad7520af5ffc7b0b4db05cf0d43a003a9a99a8ef5aee57a9d7139c7ae433 WatchSource:0}: Error finding container 452aad7520af5ffc7b0b4db05cf0d43a003a9a99a8ef5aee57a9d7139c7ae433: Status 404 returned error can't find the container with id 452aad7520af5ffc7b0b4db05cf0d43a003a9a99a8ef5aee57a9d7139c7ae433 Jan 27 07:35:44 crc kubenswrapper[4764]: I0127 07:35:44.705726 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 07:35:45 crc kubenswrapper[4764]: I0127 07:35:45.326780 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerStarted","Data":"452aad7520af5ffc7b0b4db05cf0d43a003a9a99a8ef5aee57a9d7139c7ae433"} Jan 27 07:35:45 crc kubenswrapper[4764]: I0127 07:35:45.849292 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:35:46 crc kubenswrapper[4764]: I0127 07:35:46.338097 4764 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerStarted","Data":"f6f02c6c5daa83a2bc9a244e2f2d29ce5d2eecc5852e5b7b2312f92c5454f566"} Jan 27 07:35:46 crc kubenswrapper[4764]: I0127 07:35:46.648658 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.021928 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-846df5dc9d-clqgc" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:40652->10.217.0.165:9311: read: connection reset by peer" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.023407 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-846df5dc9d-clqgc" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:40668->10.217.0.165:9311: read: connection reset by peer" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.348046 4764 generic.go:334] "Generic (PLEG): container finished" podID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerID="bf330d9035fae199f54da317068caaf0e877021e58a08135ffd390f53a3a9bcd" exitCode=0 Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.348300 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-846df5dc9d-clqgc" event={"ID":"d5476bb4-c464-49cd-acb2-1cae6acc8bea","Type":"ContainerDied","Data":"bf330d9035fae199f54da317068caaf0e877021e58a08135ffd390f53a3a9bcd"} Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.351613 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerStarted","Data":"7a63c6f08b0a17d06645cb7cf22ea4a2b888d2d6a9e477cbf1e6be4a2f580873"} Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.351653 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerStarted","Data":"2ab38bb2b6166de89e09888e133f4b6735b6676edcb75d9878b26d272825b7c6"} Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.560583 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.644936 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/d5476bb4-c464-49cd-acb2-1cae6acc8bea-kube-api-access-zzpwk\") pod \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.645045 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-internal-tls-certs\") pod \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.645086 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-public-tls-certs\") pod \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.645174 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data\") pod 
\"d5476bb4-c464-49cd-acb2-1cae6acc8bea\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.645240 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-combined-ca-bundle\") pod \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.645284 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5476bb4-c464-49cd-acb2-1cae6acc8bea-logs\") pod \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.645362 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data-custom\") pod \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\" (UID: \"d5476bb4-c464-49cd-acb2-1cae6acc8bea\") " Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.645896 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5476bb4-c464-49cd-acb2-1cae6acc8bea-logs" (OuterVolumeSpecName: "logs") pod "d5476bb4-c464-49cd-acb2-1cae6acc8bea" (UID: "d5476bb4-c464-49cd-acb2-1cae6acc8bea"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.646028 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5476bb4-c464-49cd-acb2-1cae6acc8bea-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.653569 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d5476bb4-c464-49cd-acb2-1cae6acc8bea" (UID: "d5476bb4-c464-49cd-acb2-1cae6acc8bea"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.668295 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5476bb4-c464-49cd-acb2-1cae6acc8bea-kube-api-access-zzpwk" (OuterVolumeSpecName: "kube-api-access-zzpwk") pod "d5476bb4-c464-49cd-acb2-1cae6acc8bea" (UID: "d5476bb4-c464-49cd-acb2-1cae6acc8bea"). InnerVolumeSpecName "kube-api-access-zzpwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.692564 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5476bb4-c464-49cd-acb2-1cae6acc8bea" (UID: "d5476bb4-c464-49cd-acb2-1cae6acc8bea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.702288 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d5476bb4-c464-49cd-acb2-1cae6acc8bea" (UID: "d5476bb4-c464-49cd-acb2-1cae6acc8bea"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.712723 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data" (OuterVolumeSpecName: "config-data") pod "d5476bb4-c464-49cd-acb2-1cae6acc8bea" (UID: "d5476bb4-c464-49cd-acb2-1cae6acc8bea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.719972 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d5476bb4-c464-49cd-acb2-1cae6acc8bea" (UID: "d5476bb4-c464-49cd-acb2-1cae6acc8bea"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.747828 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/d5476bb4-c464-49cd-acb2-1cae6acc8bea-kube-api-access-zzpwk\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.747863 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.747878 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.747889 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.747899 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:47 crc kubenswrapper[4764]: I0127 07:35:47.747909 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5476bb4-c464-49cd-acb2-1cae6acc8bea-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.365625 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-846df5dc9d-clqgc" event={"ID":"d5476bb4-c464-49cd-acb2-1cae6acc8bea","Type":"ContainerDied","Data":"e274b26aac78830b0e6df787b4a0361d57f28eb63aa29aeea226aef360e3be5e"} Jan 
27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.365686 4764 scope.go:117] "RemoveContainer" containerID="bf330d9035fae199f54da317068caaf0e877021e58a08135ffd390f53a3a9bcd" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.365868 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-846df5dc9d-clqgc" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.389477 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5547d9cbf4-x8lh6"] Jan 27 07:35:48 crc kubenswrapper[4764]: E0127 07:35:48.390587 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api-log" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.390618 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api-log" Jan 27 07:35:48 crc kubenswrapper[4764]: E0127 07:35:48.390644 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.390653 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.390865 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api-log" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.390899 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" containerName="barbican-api" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.392021 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.404764 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5547d9cbf4-x8lh6"] Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.406146 4764 scope.go:117] "RemoveContainer" containerID="ab2ec021a42ef7058513fbd77a0e52f702b82f4f545c319a39f029bb37028471" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.464032 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-config-data\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.464163 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-logs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.464220 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptc4m\" (UniqueName: \"kubernetes.io/projected/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-kube-api-access-ptc4m\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.464286 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-internal-tls-certs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " 
pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.464336 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-scripts\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.464382 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-combined-ca-bundle\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.464420 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-public-tls-certs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.480171 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-846df5dc9d-clqgc"] Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.492495 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-846df5dc9d-clqgc"] Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.571903 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-public-tls-certs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc 
kubenswrapper[4764]: I0127 07:35:48.572706 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-config-data\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.572806 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-logs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.572862 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptc4m\" (UniqueName: \"kubernetes.io/projected/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-kube-api-access-ptc4m\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.572959 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-internal-tls-certs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.573014 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-scripts\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.573069 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-combined-ca-bundle\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.574539 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-logs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.576890 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-public-tls-certs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.578164 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-config-data\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.581131 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-combined-ca-bundle\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.581841 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-internal-tls-certs\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.583044 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-scripts\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.595683 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptc4m\" (UniqueName: \"kubernetes.io/projected/a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e-kube-api-access-ptc4m\") pod \"placement-5547d9cbf4-x8lh6\" (UID: \"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e\") " pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.729023 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.894214 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-db5d878f-pf9rf"] Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.896707 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.904799 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-db5d878f-pf9rf"] Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.982940 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69437692-e8cb-4991-a2de-1434f68c7201-etc-swift\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.983264 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-combined-ca-bundle\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.983624 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69437692-e8cb-4991-a2de-1434f68c7201-log-httpd\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.983731 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-public-tls-certs\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.983786 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69437692-e8cb-4991-a2de-1434f68c7201-run-httpd\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.983816 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2vx\" (UniqueName: \"kubernetes.io/projected/69437692-e8cb-4991-a2de-1434f68c7201-kube-api-access-7s2vx\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.983861 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-internal-tls-certs\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:48 crc kubenswrapper[4764]: I0127 07:35:48.983963 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-config-data\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.087512 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69437692-e8cb-4991-a2de-1434f68c7201-etc-swift\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.088522 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-combined-ca-bundle\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.088820 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69437692-e8cb-4991-a2de-1434f68c7201-log-httpd\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.089530 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-public-tls-certs\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.089639 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69437692-e8cb-4991-a2de-1434f68c7201-run-httpd\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.089671 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2vx\" (UniqueName: \"kubernetes.io/projected/69437692-e8cb-4991-a2de-1434f68c7201-kube-api-access-7s2vx\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.089750 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-internal-tls-certs\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.089836 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-config-data\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.091220 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69437692-e8cb-4991-a2de-1434f68c7201-run-httpd\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.094191 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69437692-e8cb-4991-a2de-1434f68c7201-log-httpd\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.094874 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69437692-e8cb-4991-a2de-1434f68c7201-etc-swift\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.096705 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-config-data\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: 
\"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.099312 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-internal-tls-certs\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.100122 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-public-tls-certs\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.101407 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69437692-e8cb-4991-a2de-1434f68c7201-combined-ca-bundle\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.108983 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2vx\" (UniqueName: \"kubernetes.io/projected/69437692-e8cb-4991-a2de-1434f68c7201-kube-api-access-7s2vx\") pod \"swift-proxy-db5d878f-pf9rf\" (UID: \"69437692-e8cb-4991-a2de-1434f68c7201\") " pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.229735 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.298477 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5547d9cbf4-x8lh6"] Jan 27 07:35:49 crc kubenswrapper[4764]: W0127 07:35:49.299738 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b2e17e_3e8f_4ed5_b15d_aba3fb54713e.slice/crio-c46aeefff8e928c9031a1e7c968aae2f4b83c132d2a7e9d511215d722ba013c5 WatchSource:0}: Error finding container c46aeefff8e928c9031a1e7c968aae2f4b83c132d2a7e9d511215d722ba013c5: Status 404 returned error can't find the container with id c46aeefff8e928c9031a1e7c968aae2f4b83c132d2a7e9d511215d722ba013c5 Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.383895 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5547d9cbf4-x8lh6" event={"ID":"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e","Type":"ContainerStarted","Data":"c46aeefff8e928c9031a1e7c968aae2f4b83c132d2a7e9d511215d722ba013c5"} Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.394870 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerStarted","Data":"9d040d5b3b7b562d924869e3eeee2c1ecb2d5b661255e4374e1785b94519fcd6"} Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.395056 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-central-agent" containerID="cri-o://f6f02c6c5daa83a2bc9a244e2f2d29ce5d2eecc5852e5b7b2312f92c5454f566" gracePeriod=30 Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.395195 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.395275 4764 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="proxy-httpd" containerID="cri-o://9d040d5b3b7b562d924869e3eeee2c1ecb2d5b661255e4374e1785b94519fcd6" gracePeriod=30 Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.395325 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="sg-core" containerID="cri-o://2ab38bb2b6166de89e09888e133f4b6735b6676edcb75d9878b26d272825b7c6" gracePeriod=30 Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.395357 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-notification-agent" containerID="cri-o://7a63c6f08b0a17d06645cb7cf22ea4a2b888d2d6a9e477cbf1e6be4a2f580873" gracePeriod=30 Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.418872 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.715721181 podStartE2EDuration="6.418831179s" podCreationTimestamp="2026-01-27 07:35:43 +0000 UTC" firstStartedPulling="2026-01-27 07:35:44.626511144 +0000 UTC m=+1157.222133670" lastFinishedPulling="2026-01-27 07:35:48.329621142 +0000 UTC m=+1160.925243668" observedRunningTime="2026-01-27 07:35:49.416653092 +0000 UTC m=+1162.012275618" watchObservedRunningTime="2026-01-27 07:35:49.418831179 +0000 UTC m=+1162.014453705" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.643661 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-h5f8x"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.645086 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.671987 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-h5f8x"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.738111 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.738162 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.747566 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7c88d"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.749079 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.762991 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7c88d"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.801465 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.805809 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhb7q\" (UniqueName: \"kubernetes.io/projected/ea81fb26-d728-412a-a742-7c589114ce99-kube-api-access-dhb7q\") pod \"nova-api-db-create-h5f8x\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.806125 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea81fb26-d728-412a-a742-7c589114ce99-operator-scripts\") pod 
\"nova-api-db-create-h5f8x\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.808330 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.850793 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-db5d878f-pf9rf"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.896222 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-817e-account-create-update-8qzmw"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.899679 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.902104 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.921077 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rmd8x"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.922151 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.930545 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37677746-23d4-4650-bdc6-7dfe211b54d7-operator-scripts\") pod \"nova-cell0-db-create-7c88d\" (UID: \"37677746-23d4-4650-bdc6-7dfe211b54d7\") " pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.934480 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea81fb26-d728-412a-a742-7c589114ce99-operator-scripts\") pod \"nova-api-db-create-h5f8x\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.934793 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-817e-account-create-update-8qzmw"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.935293 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea81fb26-d728-412a-a742-7c589114ce99-operator-scripts\") pod \"nova-api-db-create-h5f8x\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.935958 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqqdf\" (UniqueName: \"kubernetes.io/projected/37677746-23d4-4650-bdc6-7dfe211b54d7-kube-api-access-pqqdf\") pod \"nova-cell0-db-create-7c88d\" (UID: \"37677746-23d4-4650-bdc6-7dfe211b54d7\") " pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.936423 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dhb7q\" (UniqueName: \"kubernetes.io/projected/ea81fb26-d728-412a-a742-7c589114ce99-kube-api-access-dhb7q\") pod \"nova-api-db-create-h5f8x\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.948617 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rmd8x"] Jan 27 07:35:49 crc kubenswrapper[4764]: I0127 07:35:49.962603 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhb7q\" (UniqueName: \"kubernetes.io/projected/ea81fb26-d728-412a-a742-7c589114ce99-kube-api-access-dhb7q\") pod \"nova-api-db-create-h5f8x\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.040406 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vlh\" (UniqueName: \"kubernetes.io/projected/fb2fed93-c0e7-48ee-9623-6e931a46122e-kube-api-access-v4vlh\") pod \"nova-api-817e-account-create-update-8qzmw\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.040618 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbrv\" (UniqueName: \"kubernetes.io/projected/550fe574-e06c-47c1-89a7-8e4e356c5601-kube-api-access-wxbrv\") pod \"nova-cell1-db-create-rmd8x\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.040729 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqqdf\" (UniqueName: \"kubernetes.io/projected/37677746-23d4-4650-bdc6-7dfe211b54d7-kube-api-access-pqqdf\") pod \"nova-cell0-db-create-7c88d\" (UID: 
\"37677746-23d4-4650-bdc6-7dfe211b54d7\") " pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.040933 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550fe574-e06c-47c1-89a7-8e4e356c5601-operator-scripts\") pod \"nova-cell1-db-create-rmd8x\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.041015 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2fed93-c0e7-48ee-9623-6e931a46122e-operator-scripts\") pod \"nova-api-817e-account-create-update-8qzmw\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.041136 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37677746-23d4-4650-bdc6-7dfe211b54d7-operator-scripts\") pod \"nova-cell0-db-create-7c88d\" (UID: \"37677746-23d4-4650-bdc6-7dfe211b54d7\") " pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.041874 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37677746-23d4-4650-bdc6-7dfe211b54d7-operator-scripts\") pod \"nova-cell0-db-create-7c88d\" (UID: \"37677746-23d4-4650-bdc6-7dfe211b54d7\") " pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.053138 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2465-account-create-update-5r9v9"] Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.054189 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.059399 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.061109 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.064681 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqqdf\" (UniqueName: \"kubernetes.io/projected/37677746-23d4-4650-bdc6-7dfe211b54d7-kube-api-access-pqqdf\") pod \"nova-cell0-db-create-7c88d\" (UID: \"37677746-23d4-4650-bdc6-7dfe211b54d7\") " pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.067619 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2465-account-create-update-5r9v9"] Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.082985 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.142588 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be92c208-84c9-417a-9a1b-857fc9d3e8fd-operator-scripts\") pod \"nova-cell0-2465-account-create-update-5r9v9\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.142662 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550fe574-e06c-47c1-89a7-8e4e356c5601-operator-scripts\") pod \"nova-cell1-db-create-rmd8x\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.142701 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2fed93-c0e7-48ee-9623-6e931a46122e-operator-scripts\") pod \"nova-api-817e-account-create-update-8qzmw\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.142747 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4vlh\" (UniqueName: \"kubernetes.io/projected/fb2fed93-c0e7-48ee-9623-6e931a46122e-kube-api-access-v4vlh\") pod \"nova-api-817e-account-create-update-8qzmw\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.142839 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxbrv\" (UniqueName: 
\"kubernetes.io/projected/550fe574-e06c-47c1-89a7-8e4e356c5601-kube-api-access-wxbrv\") pod \"nova-cell1-db-create-rmd8x\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.142939 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmfr9\" (UniqueName: \"kubernetes.io/projected/be92c208-84c9-417a-9a1b-857fc9d3e8fd-kube-api-access-pmfr9\") pod \"nova-cell0-2465-account-create-update-5r9v9\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.147236 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bb43-account-create-update-4mxhh"] Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.149637 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.152178 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.152184 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2fed93-c0e7-48ee-9623-6e931a46122e-operator-scripts\") pod \"nova-api-817e-account-create-update-8qzmw\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.152756 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550fe574-e06c-47c1-89a7-8e4e356c5601-operator-scripts\") pod \"nova-cell1-db-create-rmd8x\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " pod="openstack/nova-cell1-db-create-rmd8x" 
Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.165868 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4vlh\" (UniqueName: \"kubernetes.io/projected/fb2fed93-c0e7-48ee-9623-6e931a46122e-kube-api-access-v4vlh\") pod \"nova-api-817e-account-create-update-8qzmw\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.172596 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxbrv\" (UniqueName: \"kubernetes.io/projected/550fe574-e06c-47c1-89a7-8e4e356c5601-kube-api-access-wxbrv\") pod \"nova-cell1-db-create-rmd8x\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.184084 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bb43-account-create-update-4mxhh"] Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.245493 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-operator-scripts\") pod \"nova-cell1-bb43-account-create-update-4mxhh\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.246874 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfr9\" (UniqueName: \"kubernetes.io/projected/be92c208-84c9-417a-9a1b-857fc9d3e8fd-kube-api-access-pmfr9\") pod \"nova-cell0-2465-account-create-update-5r9v9\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.247380 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be92c208-84c9-417a-9a1b-857fc9d3e8fd-operator-scripts\") pod \"nova-cell0-2465-account-create-update-5r9v9\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.247701 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp6z4\" (UniqueName: \"kubernetes.io/projected/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-kube-api-access-jp6z4\") pod \"nova-cell1-bb43-account-create-update-4mxhh\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.248857 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be92c208-84c9-417a-9a1b-857fc9d3e8fd-operator-scripts\") pod \"nova-cell0-2465-account-create-update-5r9v9\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.256933 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.272583 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfr9\" (UniqueName: \"kubernetes.io/projected/be92c208-84c9-417a-9a1b-857fc9d3e8fd-kube-api-access-pmfr9\") pod \"nova-cell0-2465-account-create-update-5r9v9\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.278026 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.349871 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp6z4\" (UniqueName: \"kubernetes.io/projected/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-kube-api-access-jp6z4\") pod \"nova-cell1-bb43-account-create-update-4mxhh\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.349951 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-operator-scripts\") pod \"nova-cell1-bb43-account-create-update-4mxhh\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.350810 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-operator-scripts\") pod \"nova-cell1-bb43-account-create-update-4mxhh\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.374508 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp6z4\" (UniqueName: \"kubernetes.io/projected/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-kube-api-access-jp6z4\") pod \"nova-cell1-bb43-account-create-update-4mxhh\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.392893 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.472629 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.492764 4764 generic.go:334] "Generic (PLEG): container finished" podID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerID="9d040d5b3b7b562d924869e3eeee2c1ecb2d5b661255e4374e1785b94519fcd6" exitCode=0 Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.493407 4764 generic.go:334] "Generic (PLEG): container finished" podID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerID="2ab38bb2b6166de89e09888e133f4b6735b6676edcb75d9878b26d272825b7c6" exitCode=2 Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.493420 4764 generic.go:334] "Generic (PLEG): container finished" podID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerID="7a63c6f08b0a17d06645cb7cf22ea4a2b888d2d6a9e477cbf1e6be4a2f580873" exitCode=0 Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495021 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5476bb4-c464-49cd-acb2-1cae6acc8bea" path="/var/lib/kubelet/pods/d5476bb4-c464-49cd-acb2-1cae6acc8bea/volumes" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495759 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5547d9cbf4-x8lh6" event={"ID":"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e","Type":"ContainerStarted","Data":"c5ddc93a43147c7eb04032ffd817084e0845cf17d3c4dc0667da1ff3a4d21a28"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495794 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5547d9cbf4-x8lh6" event={"ID":"a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e","Type":"ContainerStarted","Data":"51b115660ef71b89f8a1483504fc9ad879c5d3d225e8738df9edce16c93acd31"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495815 4764 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495827 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495837 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerDied","Data":"9d040d5b3b7b562d924869e3eeee2c1ecb2d5b661255e4374e1785b94519fcd6"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495852 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerDied","Data":"2ab38bb2b6166de89e09888e133f4b6735b6676edcb75d9878b26d272825b7c6"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.495864 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerDied","Data":"7a63c6f08b0a17d06645cb7cf22ea4a2b888d2d6a9e477cbf1e6be4a2f580873"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.504822 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-db5d878f-pf9rf" event={"ID":"69437692-e8cb-4991-a2de-1434f68c7201","Type":"ContainerStarted","Data":"62f39bb2e5c60650984fb04295cc662502de7fd21cdaea92409e1b71437414ee"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.504866 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-db5d878f-pf9rf" event={"ID":"69437692-e8cb-4991-a2de-1434f68c7201","Type":"ContainerStarted","Data":"f418b3a464a754c4f86a596abe9234be4539d3ea65aa55881e8af82e6b8e0247"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.504881 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-db5d878f-pf9rf" 
event={"ID":"69437692-e8cb-4991-a2de-1434f68c7201","Type":"ContainerStarted","Data":"542b7b62421f16fe14bc03c1a62163eee3d7037fcdc6bcc9829672f8f887f0fa"} Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.504894 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.504906 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.504934 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.504950 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.544284 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5547d9cbf4-x8lh6" podStartSLOduration=2.544265825 podStartE2EDuration="2.544265825s" podCreationTimestamp="2026-01-27 07:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:50.51814703 +0000 UTC m=+1163.113769556" watchObservedRunningTime="2026-01-27 07:35:50.544265825 +0000 UTC m=+1163.139888351" Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.584365 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-db5d878f-pf9rf" podStartSLOduration=2.584343156 podStartE2EDuration="2.584343156s" podCreationTimestamp="2026-01-27 07:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:50.568528121 +0000 UTC m=+1163.164150647" watchObservedRunningTime="2026-01-27 07:35:50.584343156 +0000 UTC m=+1163.179965682" Jan 27 
07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.641217 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-h5f8x"] Jan 27 07:35:50 crc kubenswrapper[4764]: W0127 07:35:50.663667 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea81fb26_d728_412a_a742_7c589114ce99.slice/crio-73517cae6e14e173e5a3333f9231b409e1d8cfe015d5825dfd1501228dabc959 WatchSource:0}: Error finding container 73517cae6e14e173e5a3333f9231b409e1d8cfe015d5825dfd1501228dabc959: Status 404 returned error can't find the container with id 73517cae6e14e173e5a3333f9231b409e1d8cfe015d5825dfd1501228dabc959 Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.702564 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-817e-account-create-update-8qzmw"] Jan 27 07:35:50 crc kubenswrapper[4764]: I0127 07:35:50.710591 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7c88d"] Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.076716 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rmd8x"] Jan 27 07:35:51 crc kubenswrapper[4764]: W0127 07:35:51.082005 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod550fe574_e06c_47c1_89a7_8e4e356c5601.slice/crio-12c2bcc3853bdb7bf993cccaf7b06c790186611274bb797d1fd4e655ee5f1df1 WatchSource:0}: Error finding container 12c2bcc3853bdb7bf993cccaf7b06c790186611274bb797d1fd4e655ee5f1df1: Status 404 returned error can't find the container with id 12c2bcc3853bdb7bf993cccaf7b06c790186611274bb797d1fd4e655ee5f1df1 Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.109213 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2465-account-create-update-5r9v9"] Jan 27 07:35:51 crc kubenswrapper[4764]: W0127 07:35:51.117490 4764 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe92c208_84c9_417a_9a1b_857fc9d3e8fd.slice/crio-e87b77048db03c1712a1a8013fce2b497b1cc45a272643c63d0344ae2662c0b1 WatchSource:0}: Error finding container e87b77048db03c1712a1a8013fce2b497b1cc45a272643c63d0344ae2662c0b1: Status 404 returned error can't find the container with id e87b77048db03c1712a1a8013fce2b497b1cc45a272643c63d0344ae2662c0b1 Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.346281 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bb43-account-create-update-4mxhh"] Jan 27 07:35:51 crc kubenswrapper[4764]: W0127 07:35:51.347868 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc24ec6ac_1010_43f6_ada1_8aca2e6ebdb1.slice/crio-41d25fbde071d6c66f62a9d5154581af094c1256dfa53904ade1be01cc29761b WatchSource:0}: Error finding container 41d25fbde071d6c66f62a9d5154581af094c1256dfa53904ade1be01cc29761b: Status 404 returned error can't find the container with id 41d25fbde071d6c66f62a9d5154581af094c1256dfa53904ade1be01cc29761b Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.516582 4764 generic.go:334] "Generic (PLEG): container finished" podID="37677746-23d4-4650-bdc6-7dfe211b54d7" containerID="f31ac71fece61dc24e009efc3b0e27cbd7e41b087b94aa01e6abbaebb2ea849a" exitCode=0 Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.516749 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7c88d" event={"ID":"37677746-23d4-4650-bdc6-7dfe211b54d7","Type":"ContainerDied","Data":"f31ac71fece61dc24e009efc3b0e27cbd7e41b087b94aa01e6abbaebb2ea849a"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.516930 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7c88d" 
event={"ID":"37677746-23d4-4650-bdc6-7dfe211b54d7","Type":"ContainerStarted","Data":"21ea7f7bd63a710d1c8bb3a9de0434a91dfc66684ce241669079e9a5ed44e034"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.526421 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rmd8x" event={"ID":"550fe574-e06c-47c1-89a7-8e4e356c5601","Type":"ContainerStarted","Data":"3b7df24d4dd85d91b68b23275463d229caee8743e38cce4bc92bde110c2d9a0b"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.526478 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rmd8x" event={"ID":"550fe574-e06c-47c1-89a7-8e4e356c5601","Type":"ContainerStarted","Data":"12c2bcc3853bdb7bf993cccaf7b06c790186611274bb797d1fd4e655ee5f1df1"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.533041 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" event={"ID":"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1","Type":"ContainerStarted","Data":"41d25fbde071d6c66f62a9d5154581af094c1256dfa53904ade1be01cc29761b"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.539815 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" event={"ID":"be92c208-84c9-417a-9a1b-857fc9d3e8fd","Type":"ContainerStarted","Data":"8b34dc3388d15a76af579da4c3f2960d3945d3156b0bf5700ebb78ce8a57c5b9"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.539864 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" event={"ID":"be92c208-84c9-417a-9a1b-857fc9d3e8fd","Type":"ContainerStarted","Data":"e87b77048db03c1712a1a8013fce2b497b1cc45a272643c63d0344ae2662c0b1"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.543226 4764 generic.go:334] "Generic (PLEG): container finished" podID="ea81fb26-d728-412a-a742-7c589114ce99" 
containerID="87a7268565b7afb6603a2216312cf17f56282e1d904f5f91bae7087a4734993b" exitCode=0 Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.543280 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h5f8x" event={"ID":"ea81fb26-d728-412a-a742-7c589114ce99","Type":"ContainerDied","Data":"87a7268565b7afb6603a2216312cf17f56282e1d904f5f91bae7087a4734993b"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.543303 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h5f8x" event={"ID":"ea81fb26-d728-412a-a742-7c589114ce99","Type":"ContainerStarted","Data":"73517cae6e14e173e5a3333f9231b409e1d8cfe015d5825dfd1501228dabc959"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.544331 4764 generic.go:334] "Generic (PLEG): container finished" podID="fb2fed93-c0e7-48ee-9623-6e931a46122e" containerID="d89b4aed5fd83ad493e28436f3e3483e3901d068c3e76f8be76e760e0a870065" exitCode=0 Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.544521 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-817e-account-create-update-8qzmw" event={"ID":"fb2fed93-c0e7-48ee-9623-6e931a46122e","Type":"ContainerDied","Data":"d89b4aed5fd83ad493e28436f3e3483e3901d068c3e76f8be76e760e0a870065"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.544634 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-817e-account-create-update-8qzmw" event={"ID":"fb2fed93-c0e7-48ee-9623-6e931a46122e","Type":"ContainerStarted","Data":"28d6c42c2ebe6418380af58eb680bf76aecb5272f4203230151227cc68c648f7"} Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.571128 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-rmd8x" podStartSLOduration=2.571109717 podStartE2EDuration="2.571109717s" podCreationTimestamp="2026-01-27 07:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:51.565699955 +0000 UTC m=+1164.161322481" watchObservedRunningTime="2026-01-27 07:35:51.571109717 +0000 UTC m=+1164.166732243" Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.643977 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" podStartSLOduration=1.6439505570000001 podStartE2EDuration="1.643950557s" podCreationTimestamp="2026-01-27 07:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:35:51.588085932 +0000 UTC m=+1164.183708468" watchObservedRunningTime="2026-01-27 07:35:51.643950557 +0000 UTC m=+1164.239573083" Jan 27 07:35:51 crc kubenswrapper[4764]: I0127 07:35:51.928346 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.553982 4764 generic.go:334] "Generic (PLEG): container finished" podID="550fe574-e06c-47c1-89a7-8e4e356c5601" containerID="3b7df24d4dd85d91b68b23275463d229caee8743e38cce4bc92bde110c2d9a0b" exitCode=0 Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.554088 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rmd8x" event={"ID":"550fe574-e06c-47c1-89a7-8e4e356c5601","Type":"ContainerDied","Data":"3b7df24d4dd85d91b68b23275463d229caee8743e38cce4bc92bde110c2d9a0b"} Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.556777 4764 generic.go:334] "Generic (PLEG): container finished" podID="c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1" containerID="71ac7ae19b8f42fa0cfb537bc872ef5a1eed60621f3f71bc06104a40509731e9" exitCode=0 Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.556830 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" 
event={"ID":"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1","Type":"ContainerDied","Data":"71ac7ae19b8f42fa0cfb537bc872ef5a1eed60621f3f71bc06104a40509731e9"} Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.558537 4764 generic.go:334] "Generic (PLEG): container finished" podID="be92c208-84c9-417a-9a1b-857fc9d3e8fd" containerID="8b34dc3388d15a76af579da4c3f2960d3945d3156b0bf5700ebb78ce8a57c5b9" exitCode=0 Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.558630 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.558643 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:35:52 crc kubenswrapper[4764]: I0127 07:35:52.558946 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" event={"ID":"be92c208-84c9-417a-9a1b-857fc9d3e8fd","Type":"ContainerDied","Data":"8b34dc3388d15a76af579da4c3f2960d3945d3156b0bf5700ebb78ce8a57c5b9"} Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.191090 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.214188 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.224430 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.360134 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.382328 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea81fb26-d728-412a-a742-7c589114ce99-operator-scripts\") pod \"ea81fb26-d728-412a-a742-7c589114ce99\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.382468 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhb7q\" (UniqueName: \"kubernetes.io/projected/ea81fb26-d728-412a-a742-7c589114ce99-kube-api-access-dhb7q\") pod \"ea81fb26-d728-412a-a742-7c589114ce99\" (UID: \"ea81fb26-d728-412a-a742-7c589114ce99\") " Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.383162 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea81fb26-d728-412a-a742-7c589114ce99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea81fb26-d728-412a-a742-7c589114ce99" (UID: "ea81fb26-d728-412a-a742-7c589114ce99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.388302 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.401826 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea81fb26-d728-412a-a742-7c589114ce99-kube-api-access-dhb7q" (OuterVolumeSpecName: "kube-api-access-dhb7q") pod "ea81fb26-d728-412a-a742-7c589114ce99" (UID: "ea81fb26-d728-412a-a742-7c589114ce99"). InnerVolumeSpecName "kube-api-access-dhb7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484086 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4vlh\" (UniqueName: \"kubernetes.io/projected/fb2fed93-c0e7-48ee-9623-6e931a46122e-kube-api-access-v4vlh\") pod \"fb2fed93-c0e7-48ee-9623-6e931a46122e\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484200 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37677746-23d4-4650-bdc6-7dfe211b54d7-operator-scripts\") pod \"37677746-23d4-4650-bdc6-7dfe211b54d7\" (UID: \"37677746-23d4-4650-bdc6-7dfe211b54d7\") " Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484333 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2fed93-c0e7-48ee-9623-6e931a46122e-operator-scripts\") pod \"fb2fed93-c0e7-48ee-9623-6e931a46122e\" (UID: \"fb2fed93-c0e7-48ee-9623-6e931a46122e\") " Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484412 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqqdf\" (UniqueName: \"kubernetes.io/projected/37677746-23d4-4650-bdc6-7dfe211b54d7-kube-api-access-pqqdf\") pod \"37677746-23d4-4650-bdc6-7dfe211b54d7\" (UID: \"37677746-23d4-4650-bdc6-7dfe211b54d7\") " Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484830 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea81fb26-d728-412a-a742-7c589114ce99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484853 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhb7q\" (UniqueName: 
\"kubernetes.io/projected/ea81fb26-d728-412a-a742-7c589114ce99-kube-api-access-dhb7q\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484885 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37677746-23d4-4650-bdc6-7dfe211b54d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37677746-23d4-4650-bdc6-7dfe211b54d7" (UID: "37677746-23d4-4650-bdc6-7dfe211b54d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.484958 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb2fed93-c0e7-48ee-9623-6e931a46122e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb2fed93-c0e7-48ee-9623-6e931a46122e" (UID: "fb2fed93-c0e7-48ee-9623-6e931a46122e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.486852 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2fed93-c0e7-48ee-9623-6e931a46122e-kube-api-access-v4vlh" (OuterVolumeSpecName: "kube-api-access-v4vlh") pod "fb2fed93-c0e7-48ee-9623-6e931a46122e" (UID: "fb2fed93-c0e7-48ee-9623-6e931a46122e"). InnerVolumeSpecName "kube-api-access-v4vlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.487855 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37677746-23d4-4650-bdc6-7dfe211b54d7-kube-api-access-pqqdf" (OuterVolumeSpecName: "kube-api-access-pqqdf") pod "37677746-23d4-4650-bdc6-7dfe211b54d7" (UID: "37677746-23d4-4650-bdc6-7dfe211b54d7"). InnerVolumeSpecName "kube-api-access-pqqdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.568789 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h5f8x" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.571511 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h5f8x" event={"ID":"ea81fb26-d728-412a-a742-7c589114ce99","Type":"ContainerDied","Data":"73517cae6e14e173e5a3333f9231b409e1d8cfe015d5825dfd1501228dabc959"} Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.571568 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73517cae6e14e173e5a3333f9231b409e1d8cfe015d5825dfd1501228dabc959" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.579685 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-817e-account-create-update-8qzmw" event={"ID":"fb2fed93-c0e7-48ee-9623-6e931a46122e","Type":"ContainerDied","Data":"28d6c42c2ebe6418380af58eb680bf76aecb5272f4203230151227cc68c648f7"} Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.579716 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d6c42c2ebe6418380af58eb680bf76aecb5272f4203230151227cc68c648f7" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.579781 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-817e-account-create-update-8qzmw" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.581182 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7c88d" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.581387 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7c88d" event={"ID":"37677746-23d4-4650-bdc6-7dfe211b54d7","Type":"ContainerDied","Data":"21ea7f7bd63a710d1c8bb3a9de0434a91dfc66684ce241669079e9a5ed44e034"} Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.581405 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ea7f7bd63a710d1c8bb3a9de0434a91dfc66684ce241669079e9a5ed44e034" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.586909 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2fed93-c0e7-48ee-9623-6e931a46122e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.586931 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqqdf\" (UniqueName: \"kubernetes.io/projected/37677746-23d4-4650-bdc6-7dfe211b54d7-kube-api-access-pqqdf\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.586943 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4vlh\" (UniqueName: \"kubernetes.io/projected/fb2fed93-c0e7-48ee-9623-6e931a46122e-kube-api-access-v4vlh\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.586952 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37677746-23d4-4650-bdc6-7dfe211b54d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:53 crc kubenswrapper[4764]: I0127 07:35:53.986757 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.096254 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be92c208-84c9-417a-9a1b-857fc9d3e8fd-operator-scripts\") pod \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.096488 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmfr9\" (UniqueName: \"kubernetes.io/projected/be92c208-84c9-417a-9a1b-857fc9d3e8fd-kube-api-access-pmfr9\") pod \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\" (UID: \"be92c208-84c9-417a-9a1b-857fc9d3e8fd\") " Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.099025 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be92c208-84c9-417a-9a1b-857fc9d3e8fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be92c208-84c9-417a-9a1b-857fc9d3e8fd" (UID: "be92c208-84c9-417a-9a1b-857fc9d3e8fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.102584 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be92c208-84c9-417a-9a1b-857fc9d3e8fd-kube-api-access-pmfr9" (OuterVolumeSpecName: "kube-api-access-pmfr9") pod "be92c208-84c9-417a-9a1b-857fc9d3e8fd" (UID: "be92c208-84c9-417a-9a1b-857fc9d3e8fd"). InnerVolumeSpecName "kube-api-access-pmfr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.197770 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.198291 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmfr9\" (UniqueName: \"kubernetes.io/projected/be92c208-84c9-417a-9a1b-857fc9d3e8fd-kube-api-access-pmfr9\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.198316 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be92c208-84c9-417a-9a1b-857fc9d3e8fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.203126 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.299580 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-operator-scripts\") pod \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.299972 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxbrv\" (UniqueName: \"kubernetes.io/projected/550fe574-e06c-47c1-89a7-8e4e356c5601-kube-api-access-wxbrv\") pod \"550fe574-e06c-47c1-89a7-8e4e356c5601\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.300021 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp6z4\" (UniqueName: \"kubernetes.io/projected/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-kube-api-access-jp6z4\") pod \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\" (UID: \"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1\") " Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.300073 4764 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550fe574-e06c-47c1-89a7-8e4e356c5601-operator-scripts\") pod \"550fe574-e06c-47c1-89a7-8e4e356c5601\" (UID: \"550fe574-e06c-47c1-89a7-8e4e356c5601\") " Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.300312 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1" (UID: "c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.300718 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550fe574-e06c-47c1-89a7-8e4e356c5601-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "550fe574-e06c-47c1-89a7-8e4e356c5601" (UID: "550fe574-e06c-47c1-89a7-8e4e356c5601"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.301235 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550fe574-e06c-47c1-89a7-8e4e356c5601-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.301272 4764 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.304599 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-kube-api-access-jp6z4" (OuterVolumeSpecName: "kube-api-access-jp6z4") pod "c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1" (UID: "c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1"). InnerVolumeSpecName "kube-api-access-jp6z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.310571 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550fe574-e06c-47c1-89a7-8e4e356c5601-kube-api-access-wxbrv" (OuterVolumeSpecName: "kube-api-access-wxbrv") pod "550fe574-e06c-47c1-89a7-8e4e356c5601" (UID: "550fe574-e06c-47c1-89a7-8e4e356c5601"). InnerVolumeSpecName "kube-api-access-wxbrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.403697 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxbrv\" (UniqueName: \"kubernetes.io/projected/550fe574-e06c-47c1-89a7-8e4e356c5601-kube-api-access-wxbrv\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.403737 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp6z4\" (UniqueName: \"kubernetes.io/projected/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1-kube-api-access-jp6z4\") on node \"crc\" DevicePath \"\"" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.590247 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rmd8x" event={"ID":"550fe574-e06c-47c1-89a7-8e4e356c5601","Type":"ContainerDied","Data":"12c2bcc3853bdb7bf993cccaf7b06c790186611274bb797d1fd4e655ee5f1df1"} Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.590285 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c2bcc3853bdb7bf993cccaf7b06c790186611274bb797d1fd4e655ee5f1df1" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.590300 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rmd8x" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.591588 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" event={"ID":"c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1","Type":"ContainerDied","Data":"41d25fbde071d6c66f62a9d5154581af094c1256dfa53904ade1be01cc29761b"} Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.591636 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41d25fbde071d6c66f62a9d5154581af094c1256dfa53904ade1be01cc29761b" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.591703 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb43-account-create-update-4mxhh" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.594361 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.594903 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2465-account-create-update-5r9v9" event={"ID":"be92c208-84c9-417a-9a1b-857fc9d3e8fd","Type":"ContainerDied","Data":"e87b77048db03c1712a1a8013fce2b497b1cc45a272643c63d0344ae2662c0b1"} Jan 27 07:35:54 crc kubenswrapper[4764]: I0127 07:35:54.594938 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e87b77048db03c1712a1a8013fce2b497b1cc45a272643c63d0344ae2662c0b1" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.201823 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r44mw"] Jan 27 07:35:55 crc kubenswrapper[4764]: E0127 07:35:55.202321 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37677746-23d4-4650-bdc6-7dfe211b54d7" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 
07:35:55.202342 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="37677746-23d4-4650-bdc6-7dfe211b54d7" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: E0127 07:35:55.202357 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be92c208-84c9-417a-9a1b-857fc9d3e8fd" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202366 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="be92c208-84c9-417a-9a1b-857fc9d3e8fd" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: E0127 07:35:55.202380 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea81fb26-d728-412a-a742-7c589114ce99" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202388 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea81fb26-d728-412a-a742-7c589114ce99" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: E0127 07:35:55.202404 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202412 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: E0127 07:35:55.202432 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550fe574-e06c-47c1-89a7-8e4e356c5601" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202458 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="550fe574-e06c-47c1-89a7-8e4e356c5601" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: E0127 07:35:55.202476 4764 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fb2fed93-c0e7-48ee-9623-6e931a46122e" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202486 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2fed93-c0e7-48ee-9623-6e931a46122e" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202727 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2fed93-c0e7-48ee-9623-6e931a46122e" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202740 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202759 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="37677746-23d4-4650-bdc6-7dfe211b54d7" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202778 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea81fb26-d728-412a-a742-7c589114ce99" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202795 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="be92c208-84c9-417a-9a1b-857fc9d3e8fd" containerName="mariadb-account-create-update" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.202834 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="550fe574-e06c-47c1-89a7-8e4e356c5601" containerName="mariadb-database-create" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.203629 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.207413 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.208156 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-x9p6r" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.208401 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.214103 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r44mw"] Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.320471 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.320826 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtfs\" (UniqueName: \"kubernetes.io/projected/cd369b40-e130-41c4-bc59-216bb4e60d7c-kube-api-access-vwtfs\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.320908 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-scripts\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " 
pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.320966 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-config-data\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.422843 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-scripts\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.422929 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-config-data\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.423012 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.423044 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtfs\" (UniqueName: \"kubernetes.io/projected/cd369b40-e130-41c4-bc59-216bb4e60d7c-kube-api-access-vwtfs\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: 
\"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.428178 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-config-data\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.428322 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-scripts\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.428909 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.443549 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtfs\" (UniqueName: \"kubernetes.io/projected/cd369b40-e130-41c4-bc59-216bb4e60d7c-kube-api-access-vwtfs\") pod \"nova-cell0-conductor-db-sync-r44mw\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.522110 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.603646 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.609410 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-log" containerID="cri-o://98b15996955f0a4b084d8985ca3abf37d536aaa4b65951b48d6389d80e0a2136" gracePeriod=30 Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.610549 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-httpd" containerID="cri-o://60859cb06a8049a0cbb7c321949a0e33a989903faa5abd9ba419f94362da2e30" gracePeriod=30 Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.682580 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.682961 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="cinder-scheduler" containerID="cri-o://70dd79d19231a2c2805407713e3787c15f1480c5049897dfb3b4b1ac100f398e" gracePeriod=30 Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.685210 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="probe" containerID="cri-o://e05443e395b1729ee946218afc1586e2e6404fa803cbcc4bb93a53c1162aec11" gracePeriod=30 Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.720408 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:35:55 crc 
kubenswrapper[4764]: I0127 07:35:55.720969 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-log" containerID="cri-o://a7e4aebd4b980a684303610ef13089bf908063cf368e7fa7ae3bd51f375804b6" gracePeriod=30 Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.721362 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-httpd" containerID="cri-o://026ef1441aa8e1fc645c5f172ab78e733eb721075991ef3a98cdf2b37bc5a14e" gracePeriod=30 Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.765890 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.766114 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api-log" containerID="cri-o://c752d046bfbebaae6419ccddb3420b63b0330becaa143dc5a0f8fe4fe56adf8c" gracePeriod=30 Jan 27 07:35:55 crc kubenswrapper[4764]: I0127 07:35:55.766466 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api" containerID="cri-o://e68d3268761a71d086c0cff42a36b2a584e2de73402b81238f526ef6313396d0" gracePeriod=30 Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.278301 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r44mw"] Jan 27 07:35:56 crc kubenswrapper[4764]: W0127 07:35:56.289790 4764 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd369b40_e130_41c4_bc59_216bb4e60d7c.slice/crio-a30acfdb8c7557dd0f99175f85b6a3610c9b3e96ea6ed90c057ee89c2dedb062 WatchSource:0}: Error finding container a30acfdb8c7557dd0f99175f85b6a3610c9b3e96ea6ed90c057ee89c2dedb062: Status 404 returned error can't find the container with id a30acfdb8c7557dd0f99175f85b6a3610c9b3e96ea6ed90c057ee89c2dedb062 Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.620814 4764 generic.go:334] "Generic (PLEG): container finished" podID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerID="98b15996955f0a4b084d8985ca3abf37d536aaa4b65951b48d6389d80e0a2136" exitCode=143 Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.620880 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aeb9ae7e-2477-49e8-8690-d7fe580667fe","Type":"ContainerDied","Data":"98b15996955f0a4b084d8985ca3abf37d536aaa4b65951b48d6389d80e0a2136"} Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.625806 4764 generic.go:334] "Generic (PLEG): container finished" podID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerID="c752d046bfbebaae6419ccddb3420b63b0330becaa143dc5a0f8fe4fe56adf8c" exitCode=143 Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.625856 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02cf5030-57ab-4086-9849-4607dc4e91b8","Type":"ContainerDied","Data":"c752d046bfbebaae6419ccddb3420b63b0330becaa143dc5a0f8fe4fe56adf8c"} Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.628305 4764 generic.go:334] "Generic (PLEG): container finished" podID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerID="e05443e395b1729ee946218afc1586e2e6404fa803cbcc4bb93a53c1162aec11" exitCode=0 Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.628369 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"aecc4746-6e8e-46a5-b55f-04deb52a10ff","Type":"ContainerDied","Data":"e05443e395b1729ee946218afc1586e2e6404fa803cbcc4bb93a53c1162aec11"} Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.631265 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r44mw" event={"ID":"cd369b40-e130-41c4-bc59-216bb4e60d7c","Type":"ContainerStarted","Data":"a30acfdb8c7557dd0f99175f85b6a3610c9b3e96ea6ed90c057ee89c2dedb062"} Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.633982 4764 generic.go:334] "Generic (PLEG): container finished" podID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerID="a7e4aebd4b980a684303610ef13089bf908063cf368e7fa7ae3bd51f375804b6" exitCode=143 Jan 27 07:35:56 crc kubenswrapper[4764]: I0127 07:35:56.634021 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c36c41-b1e7-4d09-830c-d879d6b9a982","Type":"ContainerDied","Data":"a7e4aebd4b980a684303610ef13089bf908063cf368e7fa7ae3bd51f375804b6"} Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.235373 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.247138 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-db5d878f-pf9rf" Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.334985 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7875b648f9-hbq8d"] Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.336197 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7875b648f9-hbq8d" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-server" containerID="cri-o://66db63deb4cf08666efd669bb6549053c928ca8b295fa8ffcf41c08f09830091" gracePeriod=30 Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.336679 4764 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7875b648f9-hbq8d" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-httpd" containerID="cri-o://e6f16aea35dced08a8d9b41d8bbd64099b4f1fe033927a503a15bd5a7c9f562e" gracePeriod=30 Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.516351 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": dial tcp 10.217.0.170:8776: connect: connection refused" Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.667489 4764 generic.go:334] "Generic (PLEG): container finished" podID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerID="e6f16aea35dced08a8d9b41d8bbd64099b4f1fe033927a503a15bd5a7c9f562e" exitCode=0 Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.667699 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7875b648f9-hbq8d" event={"ID":"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10","Type":"ContainerDied","Data":"e6f16aea35dced08a8d9b41d8bbd64099b4f1fe033927a503a15bd5a7c9f562e"} Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.671192 4764 generic.go:334] "Generic (PLEG): container finished" podID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerID="026ef1441aa8e1fc645c5f172ab78e733eb721075991ef3a98cdf2b37bc5a14e" exitCode=0 Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.671274 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c36c41-b1e7-4d09-830c-d879d6b9a982","Type":"ContainerDied","Data":"026ef1441aa8e1fc645c5f172ab78e733eb721075991ef3a98cdf2b37bc5a14e"} Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.673468 4764 generic.go:334] "Generic (PLEG): container finished" podID="02cf5030-57ab-4086-9849-4607dc4e91b8" 
containerID="e68d3268761a71d086c0cff42a36b2a584e2de73402b81238f526ef6313396d0" exitCode=0 Jan 27 07:35:59 crc kubenswrapper[4764]: I0127 07:35:59.673570 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02cf5030-57ab-4086-9849-4607dc4e91b8","Type":"ContainerDied","Data":"e68d3268761a71d086c0cff42a36b2a584e2de73402b81238f526ef6313396d0"} Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.050626 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7875b648f9-hbq8d" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.174:8080/healthcheck\": dial tcp 10.217.0.174:8080: connect: connection refused" Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.050889 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7875b648f9-hbq8d" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.174:8080/healthcheck\": dial tcp 10.217.0.174:8080: connect: connection refused" Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.382745 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.176:9292/healthcheck\": dial tcp 10.217.0.176:9292: connect: connection refused" Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.382809 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.176:9292/healthcheck\": dial tcp 10.217.0.176:9292: connect: connection refused" Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.690235 4764 generic.go:334] "Generic (PLEG): container 
finished" podID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerID="60859cb06a8049a0cbb7c321949a0e33a989903faa5abd9ba419f94362da2e30" exitCode=0 Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.690320 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aeb9ae7e-2477-49e8-8690-d7fe580667fe","Type":"ContainerDied","Data":"60859cb06a8049a0cbb7c321949a0e33a989903faa5abd9ba419f94362da2e30"} Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.698765 4764 generic.go:334] "Generic (PLEG): container finished" podID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerID="f6f02c6c5daa83a2bc9a244e2f2d29ce5d2eecc5852e5b7b2312f92c5454f566" exitCode=0 Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.698890 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerDied","Data":"f6f02c6c5daa83a2bc9a244e2f2d29ce5d2eecc5852e5b7b2312f92c5454f566"} Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.708874 4764 generic.go:334] "Generic (PLEG): container finished" podID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerID="66db63deb4cf08666efd669bb6549053c928ca8b295fa8ffcf41c08f09830091" exitCode=0 Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.708967 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7875b648f9-hbq8d" event={"ID":"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10","Type":"ContainerDied","Data":"66db63deb4cf08666efd669bb6549053c928ca8b295fa8ffcf41c08f09830091"} Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.712198 4764 generic.go:334] "Generic (PLEG): container finished" podID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerID="70dd79d19231a2c2805407713e3787c15f1480c5049897dfb3b4b1ac100f398e" exitCode=0 Jan 27 07:36:00 crc kubenswrapper[4764]: I0127 07:36:00.712286 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"aecc4746-6e8e-46a5-b55f-04deb52a10ff","Type":"ContainerDied","Data":"70dd79d19231a2c2805407713e3787c15f1480c5049897dfb3b4b1ac100f398e"} Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.601658 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.728592 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.728646 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-scripts\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.728718 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-internal-tls-certs\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.728755 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02cf5030-57ab-4086-9849-4607dc4e91b8-etc-machine-id\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.728859 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data-custom\") pod 
\"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.728918 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-public-tls-certs\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.729072 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cf5030-57ab-4086-9849-4607dc4e91b8-logs\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.729112 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzbp5\" (UniqueName: \"kubernetes.io/projected/02cf5030-57ab-4086-9849-4607dc4e91b8-kube-api-access-dzbp5\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.729226 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-combined-ca-bundle\") pod \"02cf5030-57ab-4086-9849-4607dc4e91b8\" (UID: \"02cf5030-57ab-4086-9849-4607dc4e91b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.729604 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02cf5030-57ab-4086-9849-4607dc4e91b8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.729852 4764 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02cf5030-57ab-4086-9849-4607dc4e91b8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.731083 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02cf5030-57ab-4086-9849-4607dc4e91b8-logs" (OuterVolumeSpecName: "logs") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.736200 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.747697 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-scripts" (OuterVolumeSpecName: "scripts") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.757238 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02cf5030-57ab-4086-9849-4607dc4e91b8-kube-api-access-dzbp5" (OuterVolumeSpecName: "kube-api-access-dzbp5") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). 
InnerVolumeSpecName "kube-api-access-dzbp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.766043 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02cf5030-57ab-4086-9849-4607dc4e91b8","Type":"ContainerDied","Data":"9802e891c19c0c8ac861ddf80c0ee137a45deb53acaa3d3dbbcb516bf5101387"} Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.766094 4764 scope.go:117] "RemoveContainer" containerID="e68d3268761a71d086c0cff42a36b2a584e2de73402b81238f526ef6313396d0" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.766199 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.768898 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r44mw" event={"ID":"cd369b40-e130-41c4-bc59-216bb4e60d7c","Type":"ContainerStarted","Data":"3cb75292fb0c65e7de0017662858f0ff4945a05a42368411e5cb3fbadbf946cb"} Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.797343 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-r44mw" podStartSLOduration=1.699803757 podStartE2EDuration="9.797322498s" podCreationTimestamp="2026-01-27 07:35:55 +0000 UTC" firstStartedPulling="2026-01-27 07:35:56.29156263 +0000 UTC m=+1168.887185156" lastFinishedPulling="2026-01-27 07:36:04.389081371 +0000 UTC m=+1176.984703897" observedRunningTime="2026-01-27 07:36:04.788563789 +0000 UTC m=+1177.384186325" watchObservedRunningTime="2026-01-27 07:36:04.797322498 +0000 UTC m=+1177.392945034" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.803105 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.814946 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.815504 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.824998 4764 scope.go:117] "RemoveContainer" containerID="c752d046bfbebaae6419ccddb3420b63b0330becaa143dc5a0f8fe4fe56adf8c" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.831410 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.831453 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cf5030-57ab-4086-9849-4607dc4e91b8-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.831463 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzbp5\" (UniqueName: \"kubernetes.io/projected/02cf5030-57ab-4086-9849-4607dc4e91b8-kube-api-access-dzbp5\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.831474 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.831484 
4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.835645 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.849381 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.858585 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.862739 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data" (OuterVolumeSpecName: "config-data") pod "02cf5030-57ab-4086-9849-4607dc4e91b8" (UID: "02cf5030-57ab-4086-9849-4607dc4e91b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933064 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-combined-ca-bundle\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933110 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-logs\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933135 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-config-data\") pod \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933193 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-scripts\") pod \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933210 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-log-httpd\") pod \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933231 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jxlb\" (UniqueName: 
\"kubernetes.io/projected/aecc4746-6e8e-46a5-b55f-04deb52a10ff-kube-api-access-6jxlb\") pod \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933250 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bxns\" (UniqueName: \"kubernetes.io/projected/56d9b1bd-fc6c-422d-b89c-c29172c721b8-kube-api-access-6bxns\") pod \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933266 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933284 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-sg-core-conf-yaml\") pod \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933304 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-config-data\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933339 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-scripts\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933353 4764 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-combined-ca-bundle\") pod \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933455 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-run-httpd\") pod \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933478 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aecc4746-6e8e-46a5-b55f-04deb52a10ff-etc-machine-id\") pod \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933522 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpxq5\" (UniqueName: \"kubernetes.io/projected/09c36c41-b1e7-4d09-830c-d879d6b9a982-kube-api-access-rpxq5\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933543 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-internal-tls-certs\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933573 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-combined-ca-bundle\") pod 
\"56d9b1bd-fc6c-422d-b89c-c29172c721b8\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933589 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-httpd-run\") pod \"09c36c41-b1e7-4d09-830c-d879d6b9a982\" (UID: \"09c36c41-b1e7-4d09-830c-d879d6b9a982\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933604 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data-custom\") pod \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933621 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-scripts\") pod \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\" (UID: \"56d9b1bd-fc6c-422d-b89c-c29172c721b8\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.933635 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data\") pod \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\" (UID: \"aecc4746-6e8e-46a5-b55f-04deb52a10ff\") " Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.934147 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "56d9b1bd-fc6c-422d-b89c-c29172c721b8" (UID: "56d9b1bd-fc6c-422d-b89c-c29172c721b8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.934957 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.935469 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.935483 4764 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.935491 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02cf5030-57ab-4086-9849-4607dc4e91b8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.938115 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aecc4746-6e8e-46a5-b55f-04deb52a10ff-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aecc4746-6e8e-46a5-b55f-04deb52a10ff" (UID: "aecc4746-6e8e-46a5-b55f-04deb52a10ff"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.938776 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.939227 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-logs" (OuterVolumeSpecName: "logs") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.939989 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-scripts" (OuterVolumeSpecName: "scripts") pod "aecc4746-6e8e-46a5-b55f-04deb52a10ff" (UID: "aecc4746-6e8e-46a5-b55f-04deb52a10ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.942660 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.945922 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aecc4746-6e8e-46a5-b55f-04deb52a10ff" (UID: "aecc4746-6e8e-46a5-b55f-04deb52a10ff"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.947871 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d9b1bd-fc6c-422d-b89c-c29172c721b8-kube-api-access-6bxns" (OuterVolumeSpecName: "kube-api-access-6bxns") pod "56d9b1bd-fc6c-422d-b89c-c29172c721b8" (UID: "56d9b1bd-fc6c-422d-b89c-c29172c721b8"). InnerVolumeSpecName "kube-api-access-6bxns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.948068 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "56d9b1bd-fc6c-422d-b89c-c29172c721b8" (UID: "56d9b1bd-fc6c-422d-b89c-c29172c721b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.949382 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c36c41-b1e7-4d09-830c-d879d6b9a982-kube-api-access-rpxq5" (OuterVolumeSpecName: "kube-api-access-rpxq5") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "kube-api-access-rpxq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.950392 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecc4746-6e8e-46a5-b55f-04deb52a10ff-kube-api-access-6jxlb" (OuterVolumeSpecName: "kube-api-access-6jxlb") pod "aecc4746-6e8e-46a5-b55f-04deb52a10ff" (UID: "aecc4746-6e8e-46a5-b55f-04deb52a10ff"). InnerVolumeSpecName "kube-api-access-6jxlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.959970 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-scripts" (OuterVolumeSpecName: "scripts") pod "56d9b1bd-fc6c-422d-b89c-c29172c721b8" (UID: "56d9b1bd-fc6c-422d-b89c-c29172c721b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:04 crc kubenswrapper[4764]: I0127 07:36:04.968430 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-scripts" (OuterVolumeSpecName: "scripts") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.004922 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.005408 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.037922 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038020 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038033 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/aecc4746-6e8e-46a5-b55f-04deb52a10ff-kube-api-access-6jxlb\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038049 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bxns\" (UniqueName: \"kubernetes.io/projected/56d9b1bd-fc6c-422d-b89c-c29172c721b8-kube-api-access-6bxns\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038091 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038105 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038116 4764 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56d9b1bd-fc6c-422d-b89c-c29172c721b8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038127 4764 reconciler_common.go:293] "Volume 
detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aecc4746-6e8e-46a5-b55f-04deb52a10ff-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038137 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpxq5\" (UniqueName: \"kubernetes.io/projected/09c36c41-b1e7-4d09-830c-d879d6b9a982-kube-api-access-rpxq5\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038147 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c36c41-b1e7-4d09-830c-d879d6b9a982-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038176 4764 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.038189 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.091431 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.099213 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.102090 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.104276 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-config-data" (OuterVolumeSpecName: "config-data") pod "09c36c41-b1e7-4d09-830c-d879d6b9a982" (UID: "09c36c41-b1e7-4d09-830c-d879d6b9a982"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.127342 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aecc4746-6e8e-46a5-b55f-04deb52a10ff" (UID: "aecc4746-6e8e-46a5-b55f-04deb52a10ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.128374 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "56d9b1bd-fc6c-422d-b89c-c29172c721b8" (UID: "56d9b1bd-fc6c-422d-b89c-c29172c721b8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.132028 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56d9b1bd-fc6c-422d-b89c-c29172c721b8" (UID: "56d9b1bd-fc6c-422d-b89c-c29172c721b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.139837 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljdpm\" (UniqueName: \"kubernetes.io/projected/aeb9ae7e-2477-49e8-8690-d7fe580667fe-kube-api-access-ljdpm\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.139898 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-combined-ca-bundle\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.139943 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-etc-swift\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.139970 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140051 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140078 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-combined-ca-bundle\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140150 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-run-httpd\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140209 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr9jn\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-kube-api-access-tr9jn\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140249 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-public-tls-certs\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140275 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-log-httpd\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: 
\"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140306 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-config-data\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140354 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-config-data\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140375 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-public-tls-certs\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140404 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-httpd-run\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140427 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-logs\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140504 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-scripts\") pod \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\" (UID: \"aeb9ae7e-2477-49e8-8690-d7fe580667fe\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140966 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.140983 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.141014 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.141026 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.141037 4764 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.141048 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c36c41-b1e7-4d09-830c-d879d6b9a982-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.141059 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.147144 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-kube-api-access-tr9jn" (OuterVolumeSpecName: "kube-api-access-tr9jn") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "kube-api-access-tr9jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.147845 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-scripts" (OuterVolumeSpecName: "scripts") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.151159 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.155368 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.156795 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.184055 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.195088 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.195733 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.196408 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-logs" (OuterVolumeSpecName: "logs") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.185966 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb9ae7e-2477-49e8-8690-d7fe580667fe-kube-api-access-ljdpm" (OuterVolumeSpecName: "kube-api-access-ljdpm") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "kube-api-access-ljdpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.209515 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.231395 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-config-data" (OuterVolumeSpecName: "config-data") pod "56d9b1bd-fc6c-422d-b89c-c29172c721b8" (UID: "56d9b1bd-fc6c-422d-b89c-c29172c721b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.242942 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245219 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245246 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245284 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-server" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245292 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-server" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245307 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api-log" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245314 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api-log" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245325 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245332 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245342 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-httpd" Jan 27 07:36:05 crc 
kubenswrapper[4764]: I0127 07:36:05.245348 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245369 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="proxy-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245376 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="proxy-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245395 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="cinder-scheduler" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245402 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="cinder-scheduler" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245421 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-log" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245427 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-log" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245455 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-notification-agent" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245467 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-notification-agent" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245479 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="sg-core" Jan 27 07:36:05 crc 
kubenswrapper[4764]: I0127 07:36:05.245487 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="sg-core" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245591 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-central-agent" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245610 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-central-agent" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245629 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245635 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245658 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="probe" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245664 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="probe" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.245672 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-log" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.245681 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-log" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246021 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api-log" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246037 
4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-log" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246091 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-notification-agent" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246106 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="probe" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246126 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246149 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-log" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246156 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246170 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="proxy-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246189 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="sg-core" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246206 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" containerName="ceilometer-central-agent" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246213 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" containerName="cinder-scheduler" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 
07:36:05.246223 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" containerName="glance-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246231 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" containerName="glance-httpd" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.246247 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" containerName="proxy-server" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247183 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr9jn\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-kube-api-access-tr9jn\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247209 4764 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247223 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247232 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb9ae7e-2477-49e8-8690-d7fe580667fe-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247245 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247257 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/56d9b1bd-fc6c-422d-b89c-c29172c721b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247272 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljdpm\" (UniqueName: \"kubernetes.io/projected/aeb9ae7e-2477-49e8-8690-d7fe580667fe-kube-api-access-ljdpm\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247283 4764 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247325 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.247338 4764 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.249010 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.257067 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.259713 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.259830 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.260130 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.265071 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data" (OuterVolumeSpecName: "config-data") pod "aecc4746-6e8e-46a5-b55f-04deb52a10ff" (UID: "aecc4746-6e8e-46a5-b55f-04deb52a10ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.286529 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.305896 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.311631 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.325144 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.337947 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349059 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349123 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-config-data-custom\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349185 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-scripts\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349208 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/509e5ab7-b8a4-43a3-8622-e76d16374941-logs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349231 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349259 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsznb\" (UniqueName: \"kubernetes.io/projected/509e5ab7-b8a4-43a3-8622-e76d16374941-kube-api-access-fsznb\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349281 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-config-data\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349303 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-public-tls-certs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349327 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/509e5ab7-b8a4-43a3-8622-e76d16374941-etc-machine-id\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349390 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349401 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc4746-6e8e-46a5-b55f-04deb52a10ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc 
kubenswrapper[4764]: I0127 07:36:05.349410 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349419 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349429 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.349456 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: E0127 07:36:05.353773 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs podName:c21cdf5a-3da2-4f0c-bdd9-92e6c341da10 nodeName:}" failed. No retries permitted until 2026-01-27 07:36:05.853748914 +0000 UTC m=+1178.449371440 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10") : error deleting /var/lib/kubelet/pods/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10/volume-subpaths: remove /var/lib/kubelet/pods/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10/volume-subpaths: no such file or directory Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.356768 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-config-data" (OuterVolumeSpecName: "config-data") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.365152 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-config-data" (OuterVolumeSpecName: "config-data") pod "aeb9ae7e-2477-49e8-8690-d7fe580667fe" (UID: "aeb9ae7e-2477-49e8-8690-d7fe580667fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454301 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-scripts\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454344 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/509e5ab7-b8a4-43a3-8622-e76d16374941-logs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454368 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454403 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsznb\" (UniqueName: \"kubernetes.io/projected/509e5ab7-b8a4-43a3-8622-e76d16374941-kube-api-access-fsznb\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454423 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-config-data\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454460 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-public-tls-certs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454484 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/509e5ab7-b8a4-43a3-8622-e76d16374941-etc-machine-id\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454531 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454555 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-config-data-custom\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454621 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb9ae7e-2477-49e8-8690-d7fe580667fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.454631 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.455231 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/509e5ab7-b8a4-43a3-8622-e76d16374941-logs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.455542 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/509e5ab7-b8a4-43a3-8622-e76d16374941-etc-machine-id\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.474090 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-scripts\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.474709 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.475269 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.479381 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-config-data\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.479827 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-config-data-custom\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.479888 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/509e5ab7-b8a4-43a3-8622-e76d16374941-public-tls-certs\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.498102 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsznb\" (UniqueName: \"kubernetes.io/projected/509e5ab7-b8a4-43a3-8622-e76d16374941-kube-api-access-fsznb\") pod \"cinder-api-0\" (UID: \"509e5ab7-b8a4-43a3-8622-e76d16374941\") " pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.734224 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.783796 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c36c41-b1e7-4d09-830c-d879d6b9a982","Type":"ContainerDied","Data":"2ab0cc3ea79ed64ff9bb293ec9e6bcd0fdb719ae520a3616b641481c2228fc7c"} Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.783921 4764 scope.go:117] "RemoveContainer" containerID="026ef1441aa8e1fc645c5f172ab78e733eb721075991ef3a98cdf2b37bc5a14e" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.784198 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.808222 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aeb9ae7e-2477-49e8-8690-d7fe580667fe","Type":"ContainerDied","Data":"57a68ad665ea330f898508756f994b31f45a33bacbcd599532c19305aaedd487"} Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.808344 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.821613 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56d9b1bd-fc6c-422d-b89c-c29172c721b8","Type":"ContainerDied","Data":"452aad7520af5ffc7b0b4db05cf0d43a003a9a99a8ef5aee57a9d7139c7ae433"} Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.821731 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.839409 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7875b648f9-hbq8d" event={"ID":"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10","Type":"ContainerDied","Data":"3ff507b5bfcf056cb182c37d214ab857562c38a558f1b381a4660db2a80ed73b"} Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.839713 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7875b648f9-hbq8d" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.841712 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aecc4746-6e8e-46a5-b55f-04deb52a10ff","Type":"ContainerDied","Data":"07ed2908e8a51a25f1230e03b2dc1c8606fe330c4ba6d62a6c0cf255d4098d0a"} Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.841875 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.844097 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.856239 4764 scope.go:117] "RemoveContainer" containerID="a7e4aebd4b980a684303610ef13089bf908063cf368e7fa7ae3bd51f375804b6" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.864613 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs\") pod \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\" (UID: \"c21cdf5a-3da2-4f0c-bdd9-92e6c341da10\") " Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.878140 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" (UID: "c21cdf5a-3da2-4f0c-bdd9-92e6c341da10"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.882846 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.895000 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.904492 4764 scope.go:117] "RemoveContainer" containerID="60859cb06a8049a0cbb7c321949a0e33a989903faa5abd9ba419f94362da2e30" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.912560 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.928223 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.930032 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.934666 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.944719 4764 scope.go:117] "RemoveContainer" containerID="98b15996955f0a4b084d8985ca3abf37d536aaa4b65951b48d6389d80e0a2136" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.942066 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.937897 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gr88t" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.937974 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.939243 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.948489 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.951039 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.951222 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.958092 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.967748 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.967856 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjs8\" (UniqueName: \"kubernetes.io/projected/39745950-025a-428e-bd01-03d3d0d5050b-kube-api-access-qbjs8\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.967891 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39745950-025a-428e-bd01-03d3d0d5050b-logs\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.967926 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.968060 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.968111 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.968128 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39745950-025a-428e-bd01-03d3d0d5050b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.968185 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.968233 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.971013 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:36:05 crc kubenswrapper[4764]: I0127 07:36:05.987685 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.000124 4764 scope.go:117] "RemoveContainer" containerID="9d040d5b3b7b562d924869e3eeee2c1ecb2d5b661255e4374e1785b94519fcd6" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.005523 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.017555 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.022810 4764 scope.go:117] "RemoveContainer" containerID="2ab38bb2b6166de89e09888e133f4b6735b6676edcb75d9878b26d272825b7c6" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.027003 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.035459 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.037806 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.039759 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.041027 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.047287 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.051182 4764 scope.go:117] "RemoveContainer" containerID="7a63c6f08b0a17d06645cb7cf22ea4a2b888d2d6a9e477cbf1e6be4a2f580873" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.057821 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.059469 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.061786 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.071100 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-run-httpd\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.071288 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 
07:36:06.071400 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbjs8\" (UniqueName: \"kubernetes.io/projected/39745950-025a-428e-bd01-03d3d0d5050b-kube-api-access-qbjs8\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.071873 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.074699 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-log-httpd\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.074927 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39745950-025a-428e-bd01-03d3d0d5050b-logs\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075032 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075108 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9zr\" (UniqueName: \"kubernetes.io/projected/1e9a8eb4-9ded-40d9-91c2-824abdc80016-kube-api-access-9t9zr\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075255 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8eb4-9ded-40d9-91c2-824abdc80016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075335 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdvj4\" (UniqueName: \"kubernetes.io/projected/0d103ed0-456f-4890-8370-d0185b7af9b3-kube-api-access-vdvj4\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075481 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075582 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075696 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075802 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.075934 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.076099 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39745950-025a-428e-bd01-03d3d0d5050b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.076259 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-config-data\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.076407 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.076511 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.076724 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.076922 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-scripts\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.077165 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8eb4-9ded-40d9-91c2-824abdc80016-logs\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.077361 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.078082 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.078579 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39745950-025a-428e-bd01-03d3d0d5050b-logs\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.079984 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.081576 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.082329 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39745950-025a-428e-bd01-03d3d0d5050b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.085275 4764 scope.go:117] "RemoveContainer" containerID="f6f02c6c5daa83a2bc9a244e2f2d29ce5d2eecc5852e5b7b2312f92c5454f566" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.091968 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.113367 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.114080 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39745950-025a-428e-bd01-03d3d0d5050b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.115338 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbjs8\" (UniqueName: \"kubernetes.io/projected/39745950-025a-428e-bd01-03d3d0d5050b-kube-api-access-qbjs8\") pod \"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.124846 4764 scope.go:117] "RemoveContainer" containerID="66db63deb4cf08666efd669bb6549053c928ca8b295fa8ffcf41c08f09830091" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.149970 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"39745950-025a-428e-bd01-03d3d0d5050b\") " pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.164894 4764 scope.go:117] "RemoveContainer" containerID="e6f16aea35dced08a8d9b41d8bbd64099b4f1fe033927a503a15bd5a7c9f562e" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179508 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-config-data\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179617 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179653 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179685 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179714 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179742 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-config-data\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179771 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179797 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179830 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179857 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-scripts\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179886 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-scripts\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179915 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8eb4-9ded-40d9-91c2-824abdc80016-logs\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179943 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-run-httpd\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.179971 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/311b6d25-72b1-40db-911d-78426da15c6b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.180024 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 
27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.180050 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-log-httpd\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.180089 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9zr\" (UniqueName: \"kubernetes.io/projected/1e9a8eb4-9ded-40d9-91c2-824abdc80016-kube-api-access-9t9zr\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.180125 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.180171 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdvbg\" (UniqueName: \"kubernetes.io/projected/311b6d25-72b1-40db-911d-78426da15c6b-kube-api-access-sdvbg\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.180200 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8eb4-9ded-40d9-91c2-824abdc80016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.180259 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdvj4\" (UniqueName: \"kubernetes.io/projected/0d103ed0-456f-4890-8370-d0185b7af9b3-kube-api-access-vdvj4\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.182856 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8eb4-9ded-40d9-91c2-824abdc80016-logs\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.183201 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-run-httpd\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.183428 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.198313 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.200338 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.200688 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.201659 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.202073 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8eb4-9ded-40d9-91c2-824abdc80016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.202404 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-scripts\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.202876 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-log-httpd\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.210573 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.215052 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e9a8eb4-9ded-40d9-91c2-824abdc80016-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.220954 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-config-data\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.230654 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdvj4\" (UniqueName: \"kubernetes.io/projected/0d103ed0-456f-4890-8370-d0185b7af9b3-kube-api-access-vdvj4\") pod \"ceilometer-0\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.239731 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9zr\" (UniqueName: \"kubernetes.io/projected/1e9a8eb4-9ded-40d9-91c2-824abdc80016-kube-api-access-9t9zr\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.240168 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e9a8eb4-9ded-40d9-91c2-824abdc80016\") " pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.282402 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.283933 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/311b6d25-72b1-40db-911d-78426da15c6b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.284000 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.284032 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdvbg\" (UniqueName: \"kubernetes.io/projected/311b6d25-72b1-40db-911d-78426da15c6b-kube-api-access-sdvbg\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.284037 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/311b6d25-72b1-40db-911d-78426da15c6b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.284063 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-config-data\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.284603 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.284837 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-scripts\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.288157 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-scripts\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.288348 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.288468 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " 
pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.288281 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311b6d25-72b1-40db-911d-78426da15c6b-config-data\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.297078 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.299947 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdvbg\" (UniqueName: \"kubernetes.io/projected/311b6d25-72b1-40db-911d-78426da15c6b-kube-api-access-sdvbg\") pod \"cinder-scheduler-0\" (UID: \"311b6d25-72b1-40db-911d-78426da15c6b\") " pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.300269 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.361845 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.382109 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.423358 4764 scope.go:117] "RemoveContainer" containerID="e05443e395b1729ee946218afc1586e2e6404fa803cbcc4bb93a53c1162aec11" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.498144 4764 scope.go:117] "RemoveContainer" containerID="70dd79d19231a2c2805407713e3787c15f1480c5049897dfb3b4b1ac100f398e" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.510887 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" path="/var/lib/kubelet/pods/02cf5030-57ab-4086-9849-4607dc4e91b8/volumes" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.511911 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c36c41-b1e7-4d09-830c-d879d6b9a982" path="/var/lib/kubelet/pods/09c36c41-b1e7-4d09-830c-d879d6b9a982/volumes" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.512573 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d9b1bd-fc6c-422d-b89c-c29172c721b8" path="/var/lib/kubelet/pods/56d9b1bd-fc6c-422d-b89c-c29172c721b8/volumes" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.514037 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb9ae7e-2477-49e8-8690-d7fe580667fe" path="/var/lib/kubelet/pods/aeb9ae7e-2477-49e8-8690-d7fe580667fe/volumes" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.519212 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecc4746-6e8e-46a5-b55f-04deb52a10ff" path="/var/lib/kubelet/pods/aecc4746-6e8e-46a5-b55f-04deb52a10ff/volumes" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.521298 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68dd46b99f-78lf2" Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.521320 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/swift-proxy-7875b648f9-hbq8d"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.521338 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-7875b648f9-hbq8d"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.600604 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cc7fb5f4f-2d6z5"] Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.600865 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cc7fb5f4f-2d6z5" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-api" containerID="cri-o://4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69" gracePeriod=30 Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.600941 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cc7fb5f4f-2d6z5" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-httpd" containerID="cri-o://ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e" gracePeriod=30 Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.865744 4764 generic.go:334] "Generic (PLEG): container finished" podID="50def599-3481-4d79-9c71-5fb10f6500ab" containerID="ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e" exitCode=0 Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.865801 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7fb5f4f-2d6z5" event={"ID":"50def599-3481-4d79-9c71-5fb10f6500ab","Type":"ContainerDied","Data":"ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e"} Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 07:36:06.906369 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"509e5ab7-b8a4-43a3-8622-e76d16374941","Type":"ContainerStarted","Data":"52da87a0b1db659610a2143fd14e04d401d32e905cb81ea244637f11e43b7dcd"} Jan 27 07:36:06 crc kubenswrapper[4764]: I0127 
07:36:06.937888 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 07:36:07 crc kubenswrapper[4764]: I0127 07:36:07.049027 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:07 crc kubenswrapper[4764]: I0127 07:36:07.069715 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 07:36:07 crc kubenswrapper[4764]: W0127 07:36:07.080857 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d103ed0_456f_4890_8370_d0185b7af9b3.slice/crio-f3ab7b9dd39164cad3b0a9443df1e9c3217712d9170dc57c9b42b3fb28ee4ce0 WatchSource:0}: Error finding container f3ab7b9dd39164cad3b0a9443df1e9c3217712d9170dc57c9b42b3fb28ee4ce0: Status 404 returned error can't find the container with id f3ab7b9dd39164cad3b0a9443df1e9c3217712d9170dc57c9b42b3fb28ee4ce0 Jan 27 07:36:07 crc kubenswrapper[4764]: W0127 07:36:07.087223 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod311b6d25_72b1_40db_911d_78426da15c6b.slice/crio-8d75bc1fd0421c71e37ce985ec7e857dfee33f3334b9d23dad6df966e4f50861 WatchSource:0}: Error finding container 8d75bc1fd0421c71e37ce985ec7e857dfee33f3334b9d23dad6df966e4f50861: Status 404 returned error can't find the container with id 8d75bc1fd0421c71e37ce985ec7e857dfee33f3334b9d23dad6df966e4f50861 Jan 27 07:36:07 crc kubenswrapper[4764]: I0127 07:36:07.938571 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 07:36:07 crc kubenswrapper[4764]: I0127 07:36:07.975111 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"311b6d25-72b1-40db-911d-78426da15c6b","Type":"ContainerStarted","Data":"e26f831105770b4a0a2edd6135dcbfa50e282feab823f48634645fb16000d5ab"} Jan 27 07:36:07 crc 
kubenswrapper[4764]: I0127 07:36:07.975157 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"311b6d25-72b1-40db-911d-78426da15c6b","Type":"ContainerStarted","Data":"8d75bc1fd0421c71e37ce985ec7e857dfee33f3334b9d23dad6df966e4f50861"} Jan 27 07:36:08 crc kubenswrapper[4764]: I0127 07:36:08.026612 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39745950-025a-428e-bd01-03d3d0d5050b","Type":"ContainerStarted","Data":"92e4775ba3d52a150138a579bdf260244cec244445ffa32a60035ffec6f0a205"} Jan 27 07:36:08 crc kubenswrapper[4764]: I0127 07:36:08.026658 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39745950-025a-428e-bd01-03d3d0d5050b","Type":"ContainerStarted","Data":"eb6800417e268e21300e19a1ca5813d63353017b13191059453566375514c68c"} Jan 27 07:36:08 crc kubenswrapper[4764]: W0127 07:36:08.026722 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e9a8eb4_9ded_40d9_91c2_824abdc80016.slice/crio-da99c4dac18d64df1756fecbcee4c007c71baf88e5c4351b55da71b710f301e9 WatchSource:0}: Error finding container da99c4dac18d64df1756fecbcee4c007c71baf88e5c4351b55da71b710f301e9: Status 404 returned error can't find the container with id da99c4dac18d64df1756fecbcee4c007c71baf88e5c4351b55da71b710f301e9 Jan 27 07:36:08 crc kubenswrapper[4764]: I0127 07:36:08.057757 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"509e5ab7-b8a4-43a3-8622-e76d16374941","Type":"ContainerStarted","Data":"1ab1f7048605ea9dad9c8e07afc9c3efc2d8fe0c7c25476526983913ef8bb89d"} Jan 27 07:36:08 crc kubenswrapper[4764]: I0127 07:36:08.062663 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerStarted","Data":"b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423"} Jan 27 07:36:08 crc kubenswrapper[4764]: I0127 07:36:08.062717 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerStarted","Data":"f3ab7b9dd39164cad3b0a9443df1e9c3217712d9170dc57c9b42b3fb28ee4ce0"} Jan 27 07:36:08 crc kubenswrapper[4764]: I0127 07:36:08.465373 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21cdf5a-3da2-4f0c-bdd9-92e6c341da10" path="/var/lib/kubelet/pods/c21cdf5a-3da2-4f0c-bdd9-92e6c341da10/volumes" Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.081861 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e9a8eb4-9ded-40d9-91c2-824abdc80016","Type":"ContainerStarted","Data":"fb8818f879cb497af187fd14a7175b38fc6345c2e3abd5805d661a098d6cabcd"} Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.082325 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e9a8eb4-9ded-40d9-91c2-824abdc80016","Type":"ContainerStarted","Data":"da99c4dac18d64df1756fecbcee4c007c71baf88e5c4351b55da71b710f301e9"} Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.087846 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"311b6d25-72b1-40db-911d-78426da15c6b","Type":"ContainerStarted","Data":"f8a2b300bc8380a4b3853705cc085b82c9784bfe56612324a4e7e970b733e3ad"} Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.090284 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39745950-025a-428e-bd01-03d3d0d5050b","Type":"ContainerStarted","Data":"5e244199b47016bf673f8338fce3345c4a680fb15c7fb2548ba62e38fdb4a0e2"} Jan 27 07:36:09 crc kubenswrapper[4764]: 
I0127 07:36:09.092226 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"509e5ab7-b8a4-43a3-8622-e76d16374941","Type":"ContainerStarted","Data":"7d6efbe9b5ca1b808b92d6fa76fe7101301518c7d96f8f3f4982a5355626eb72"} Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.092856 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.094397 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerStarted","Data":"a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b"} Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.114605 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.114585192 podStartE2EDuration="4.114585192s" podCreationTimestamp="2026-01-27 07:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:09.111963394 +0000 UTC m=+1181.707585920" watchObservedRunningTime="2026-01-27 07:36:09.114585192 +0000 UTC m=+1181.710207718" Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.146579 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.146562729 podStartE2EDuration="4.146562729s" podCreationTimestamp="2026-01-27 07:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:09.140063549 +0000 UTC m=+1181.735686075" watchObservedRunningTime="2026-01-27 07:36:09.146562729 +0000 UTC m=+1181.742185255" Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.170781 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.170757023 podStartE2EDuration="4.170757023s" podCreationTimestamp="2026-01-27 07:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:09.169789877 +0000 UTC m=+1181.765412403" watchObservedRunningTime="2026-01-27 07:36:09.170757023 +0000 UTC m=+1181.766379549" Jan 27 07:36:09 crc kubenswrapper[4764]: I0127 07:36:09.522628 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="02cf5030-57ab-4086-9849-4607dc4e91b8" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:36:10 crc kubenswrapper[4764]: I0127 07:36:10.107266 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e9a8eb4-9ded-40d9-91c2-824abdc80016","Type":"ContainerStarted","Data":"2774ebbda0c75370d8736886e4bd6b6151df453accd6209333b0f11cdd36e99c"} Jan 27 07:36:10 crc kubenswrapper[4764]: I0127 07:36:10.110276 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerStarted","Data":"7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74"} Jan 27 07:36:10 crc kubenswrapper[4764]: I0127 07:36:10.137452 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.137424408 podStartE2EDuration="5.137424408s" podCreationTimestamp="2026-01-27 07:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:10.134655386 +0000 UTC m=+1182.730277912" watchObservedRunningTime="2026-01-27 07:36:10.137424408 
+0000 UTC m=+1182.733046934" Jan 27 07:36:11 crc kubenswrapper[4764]: I0127 07:36:11.382951 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 07:36:12 crc kubenswrapper[4764]: I0127 07:36:12.129901 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerStarted","Data":"2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087"} Jan 27 07:36:12 crc kubenswrapper[4764]: I0127 07:36:12.130358 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 07:36:12 crc kubenswrapper[4764]: I0127 07:36:12.154221 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.448850375 podStartE2EDuration="7.154203572s" podCreationTimestamp="2026-01-27 07:36:05 +0000 UTC" firstStartedPulling="2026-01-27 07:36:07.083943256 +0000 UTC m=+1179.679565792" lastFinishedPulling="2026-01-27 07:36:10.789296463 +0000 UTC m=+1183.384918989" observedRunningTime="2026-01-27 07:36:12.151719947 +0000 UTC m=+1184.747342493" watchObservedRunningTime="2026-01-27 07:36:12.154203572 +0000 UTC m=+1184.749826098" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.099463 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.171691 4764 generic.go:334] "Generic (PLEG): container finished" podID="50def599-3481-4d79-9c71-5fb10f6500ab" containerID="4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69" exitCode=0 Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.171737 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7fb5f4f-2d6z5" event={"ID":"50def599-3481-4d79-9c71-5fb10f6500ab","Type":"ContainerDied","Data":"4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69"} Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.171769 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc7fb5f4f-2d6z5" event={"ID":"50def599-3481-4d79-9c71-5fb10f6500ab","Type":"ContainerDied","Data":"3266ae2646f5bf4f0c531b68fe9a497a59e6d95f730f12260eb6b5443ac410f9"} Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.171785 4764 scope.go:117] "RemoveContainer" containerID="ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.171740 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc7fb5f4f-2d6z5" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.215918 4764 scope.go:117] "RemoveContainer" containerID="4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.245043 4764 scope.go:117] "RemoveContainer" containerID="ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e" Jan 27 07:36:16 crc kubenswrapper[4764]: E0127 07:36:16.245780 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e\": container with ID starting with ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e not found: ID does not exist" containerID="ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.245832 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e"} err="failed to get container status \"ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e\": rpc error: code = NotFound desc = could not find container \"ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e\": container with ID starting with ebabb37148a7ea9260c05c07e05e43d161fe153e65c6ca71b32ae03d7728cf5e not found: ID does not exist" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.245860 4764 scope.go:117] "RemoveContainer" containerID="4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69" Jan 27 07:36:16 crc kubenswrapper[4764]: E0127 07:36:16.246423 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69\": container with ID starting with 
4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69 not found: ID does not exist" containerID="4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.246470 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69"} err="failed to get container status \"4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69\": rpc error: code = NotFound desc = could not find container \"4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69\": container with ID starting with 4fdcedb4c46732c538f0bb963ea9f5698b1ddeab7d572b3b40aca07e16303f69 not found: ID does not exist" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.257557 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-public-tls-certs\") pod \"50def599-3481-4d79-9c71-5fb10f6500ab\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.257665 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-httpd-config\") pod \"50def599-3481-4d79-9c71-5fb10f6500ab\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.257720 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kflbv\" (UniqueName: \"kubernetes.io/projected/50def599-3481-4d79-9c71-5fb10f6500ab-kube-api-access-kflbv\") pod \"50def599-3481-4d79-9c71-5fb10f6500ab\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.257778 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-ovndb-tls-certs\") pod \"50def599-3481-4d79-9c71-5fb10f6500ab\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.257825 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-config\") pod \"50def599-3481-4d79-9c71-5fb10f6500ab\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.257927 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-combined-ca-bundle\") pod \"50def599-3481-4d79-9c71-5fb10f6500ab\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.257979 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-internal-tls-certs\") pod \"50def599-3481-4d79-9c71-5fb10f6500ab\" (UID: \"50def599-3481-4d79-9c71-5fb10f6500ab\") " Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.265966 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50def599-3481-4d79-9c71-5fb10f6500ab-kube-api-access-kflbv" (OuterVolumeSpecName: "kube-api-access-kflbv") pod "50def599-3481-4d79-9c71-5fb10f6500ab" (UID: "50def599-3481-4d79-9c71-5fb10f6500ab"). InnerVolumeSpecName "kube-api-access-kflbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.266301 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "50def599-3481-4d79-9c71-5fb10f6500ab" (UID: "50def599-3481-4d79-9c71-5fb10f6500ab"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.283424 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.283551 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.299208 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.299249 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.335016 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.335516 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "50def599-3481-4d79-9c71-5fb10f6500ab" (UID: "50def599-3481-4d79-9c71-5fb10f6500ab"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.336360 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.338361 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50def599-3481-4d79-9c71-5fb10f6500ab" (UID: "50def599-3481-4d79-9c71-5fb10f6500ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.343106 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.350423 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-config" (OuterVolumeSpecName: "config") pod "50def599-3481-4d79-9c71-5fb10f6500ab" (UID: "50def599-3481-4d79-9c71-5fb10f6500ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.351888 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "50def599-3481-4d79-9c71-5fb10f6500ab" (UID: "50def599-3481-4d79-9c71-5fb10f6500ab"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.353642 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.361723 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.361771 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.361783 4764 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.361801 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kflbv\" (UniqueName: \"kubernetes.io/projected/50def599-3481-4d79-9c71-5fb10f6500ab-kube-api-access-kflbv\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.361814 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.361824 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.372206 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "50def599-3481-4d79-9c71-5fb10f6500ab" (UID: "50def599-3481-4d79-9c71-5fb10f6500ab"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.465547 4764 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50def599-3481-4d79-9c71-5fb10f6500ab-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.497562 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cc7fb5f4f-2d6z5"] Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.505963 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cc7fb5f4f-2d6z5"] Jan 27 07:36:16 crc kubenswrapper[4764]: I0127 07:36:16.596761 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 07:36:17 crc kubenswrapper[4764]: I0127 07:36:17.180106 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 07:36:17 crc kubenswrapper[4764]: I0127 07:36:17.180145 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 07:36:17 crc kubenswrapper[4764]: I0127 07:36:17.180155 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:17 crc kubenswrapper[4764]: I0127 07:36:17.180164 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:18 crc kubenswrapper[4764]: I0127 07:36:18.451107 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" 
path="/var/lib/kubelet/pods/50def599-3481-4d79-9c71-5fb10f6500ab/volumes" Jan 27 07:36:18 crc kubenswrapper[4764]: I0127 07:36:18.611488 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 07:36:19 crc kubenswrapper[4764]: I0127 07:36:19.554329 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 07:36:19 crc kubenswrapper[4764]: I0127 07:36:19.554465 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:36:19 crc kubenswrapper[4764]: I0127 07:36:19.556570 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 07:36:19 crc kubenswrapper[4764]: I0127 07:36:19.921651 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:36:19 crc kubenswrapper[4764]: I0127 07:36:19.959249 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5547d9cbf4-x8lh6" Jan 27 07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.037128 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fbd5ff6f8-6tmz6"] Jan 27 07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.037398 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7fbd5ff6f8-6tmz6" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-log" containerID="cri-o://0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083" gracePeriod=30 Jan 27 07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.037901 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7fbd5ff6f8-6tmz6" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-api" containerID="cri-o://b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e" gracePeriod=30 Jan 27 
07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.235045 4764 generic.go:334] "Generic (PLEG): container finished" podID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerID="0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083" exitCode=143 Jan 27 07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.236342 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fbd5ff6f8-6tmz6" event={"ID":"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25","Type":"ContainerDied","Data":"0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083"} Jan 27 07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.399210 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.399328 4764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:36:20 crc kubenswrapper[4764]: I0127 07:36:20.403951 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 07:36:21 crc kubenswrapper[4764]: I0127 07:36:21.245301 4764 generic.go:334] "Generic (PLEG): container finished" podID="cd369b40-e130-41c4-bc59-216bb4e60d7c" containerID="3cb75292fb0c65e7de0017662858f0ff4945a05a42368411e5cb3fbadbf946cb" exitCode=0 Jan 27 07:36:21 crc kubenswrapper[4764]: I0127 07:36:21.245386 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r44mw" event={"ID":"cd369b40-e130-41c4-bc59-216bb4e60d7c","Type":"ContainerDied","Data":"3cb75292fb0c65e7de0017662858f0ff4945a05a42368411e5cb3fbadbf946cb"} Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.618014 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.692204 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-combined-ca-bundle\") pod \"cd369b40-e130-41c4-bc59-216bb4e60d7c\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.692286 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-config-data\") pod \"cd369b40-e130-41c4-bc59-216bb4e60d7c\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.692329 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwtfs\" (UniqueName: \"kubernetes.io/projected/cd369b40-e130-41c4-bc59-216bb4e60d7c-kube-api-access-vwtfs\") pod \"cd369b40-e130-41c4-bc59-216bb4e60d7c\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.692385 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-scripts\") pod \"cd369b40-e130-41c4-bc59-216bb4e60d7c\" (UID: \"cd369b40-e130-41c4-bc59-216bb4e60d7c\") " Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.702305 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd369b40-e130-41c4-bc59-216bb4e60d7c-kube-api-access-vwtfs" (OuterVolumeSpecName: "kube-api-access-vwtfs") pod "cd369b40-e130-41c4-bc59-216bb4e60d7c" (UID: "cd369b40-e130-41c4-bc59-216bb4e60d7c"). InnerVolumeSpecName "kube-api-access-vwtfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.702306 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-scripts" (OuterVolumeSpecName: "scripts") pod "cd369b40-e130-41c4-bc59-216bb4e60d7c" (UID: "cd369b40-e130-41c4-bc59-216bb4e60d7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.723271 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-config-data" (OuterVolumeSpecName: "config-data") pod "cd369b40-e130-41c4-bc59-216bb4e60d7c" (UID: "cd369b40-e130-41c4-bc59-216bb4e60d7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.726083 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd369b40-e130-41c4-bc59-216bb4e60d7c" (UID: "cd369b40-e130-41c4-bc59-216bb4e60d7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.795933 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.795987 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwtfs\" (UniqueName: \"kubernetes.io/projected/cd369b40-e130-41c4-bc59-216bb4e60d7c-kube-api-access-vwtfs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.795999 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:22 crc kubenswrapper[4764]: I0127 07:36:22.796008 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd369b40-e130-41c4-bc59-216bb4e60d7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.260863 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r44mw" event={"ID":"cd369b40-e130-41c4-bc59-216bb4e60d7c","Type":"ContainerDied","Data":"a30acfdb8c7557dd0f99175f85b6a3610c9b3e96ea6ed90c057ee89c2dedb062"} Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.260896 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a30acfdb8c7557dd0f99175f85b6a3610c9b3e96ea6ed90c057ee89c2dedb062" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.260952 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r44mw" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.345408 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:36:23 crc kubenswrapper[4764]: E0127 07:36:23.346015 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-api" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.346037 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-api" Jan 27 07:36:23 crc kubenswrapper[4764]: E0127 07:36:23.346053 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd369b40-e130-41c4-bc59-216bb4e60d7c" containerName="nova-cell0-conductor-db-sync" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.346061 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd369b40-e130-41c4-bc59-216bb4e60d7c" containerName="nova-cell0-conductor-db-sync" Jan 27 07:36:23 crc kubenswrapper[4764]: E0127 07:36:23.346080 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-httpd" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.346086 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-httpd" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.346246 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-api" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.346259 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd369b40-e130-41c4-bc59-216bb4e60d7c" containerName="nova-cell0-conductor-db-sync" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.346277 4764 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="50def599-3481-4d79-9c71-5fb10f6500ab" containerName="neutron-httpd" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.346832 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.349662 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-x9p6r" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.349893 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.371890 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.406856 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75scq\" (UniqueName: \"kubernetes.io/projected/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-kube-api-access-75scq\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.406912 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.406951 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: 
I0127 07:36:23.509151 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75scq\" (UniqueName: \"kubernetes.io/projected/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-kube-api-access-75scq\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.509221 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.509276 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.514836 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.516777 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.531315 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75scq\" 
(UniqueName: \"kubernetes.io/projected/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-kube-api-access-75scq\") pod \"nova-cell0-conductor-0\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.702817 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.721793 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.762359 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.762411 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.815061 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmk9\" (UniqueName: \"kubernetes.io/projected/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-kube-api-access-dcmk9\") pod \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.815099 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-public-tls-certs\") pod 
\"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.815121 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-internal-tls-certs\") pod \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.815157 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-config-data\") pod \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.815177 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-scripts\") pod \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.815230 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-combined-ca-bundle\") pod \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.815313 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-logs\") pod \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\" (UID: \"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25\") " Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.817086 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-logs" (OuterVolumeSpecName: "logs") pod "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" (UID: "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.820225 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-scripts" (OuterVolumeSpecName: "scripts") pod "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" (UID: "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.822259 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-kube-api-access-dcmk9" (OuterVolumeSpecName: "kube-api-access-dcmk9") pod "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" (UID: "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25"). InnerVolumeSpecName "kube-api-access-dcmk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.879127 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" (UID: "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.879752 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-config-data" (OuterVolumeSpecName: "config-data") pod "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" (UID: "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.917029 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.917050 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.917060 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmk9\" (UniqueName: \"kubernetes.io/projected/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-kube-api-access-dcmk9\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.917071 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.917079 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.942654 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" (UID: "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:23 crc kubenswrapper[4764]: I0127 07:36:23.987828 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" (UID: "3c441fce-1c1f-4e6c-8fbd-12ef92f35f25"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.018522 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.018550 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.203974 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:36:24 crc kubenswrapper[4764]: W0127 07:36:24.211965 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32c7f10b_0a05_432c_9c2e_b53bcc358a0f.slice/crio-19c7bc2d1aa157cc4a00dc4c3f9dce82f1215cc31637243eb15cab494a2992d6 WatchSource:0}: Error finding container 19c7bc2d1aa157cc4a00dc4c3f9dce82f1215cc31637243eb15cab494a2992d6: Status 404 returned error can't find the container with id 19c7bc2d1aa157cc4a00dc4c3f9dce82f1215cc31637243eb15cab494a2992d6 Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.286532 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"32c7f10b-0a05-432c-9c2e-b53bcc358a0f","Type":"ContainerStarted","Data":"19c7bc2d1aa157cc4a00dc4c3f9dce82f1215cc31637243eb15cab494a2992d6"} Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.296458 4764 generic.go:334] "Generic (PLEG): container finished" podID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerID="b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e" exitCode=0 Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.296603 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fbd5ff6f8-6tmz6" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.296928 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fbd5ff6f8-6tmz6" event={"ID":"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25","Type":"ContainerDied","Data":"b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e"} Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.297099 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fbd5ff6f8-6tmz6" event={"ID":"3c441fce-1c1f-4e6c-8fbd-12ef92f35f25","Type":"ContainerDied","Data":"69d09b50c221471e14c80d77fc06f0d6948f0b481b83891992d914c45215b980"} Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.297198 4764 scope.go:117] "RemoveContainer" containerID="b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.336915 4764 scope.go:117] "RemoveContainer" containerID="0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.373414 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fbd5ff6f8-6tmz6"] Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.376725 4764 scope.go:117] "RemoveContainer" containerID="b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e" Jan 27 07:36:24 crc kubenswrapper[4764]: E0127 07:36:24.377307 4764 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e\": container with ID starting with b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e not found: ID does not exist" containerID="b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.377376 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e"} err="failed to get container status \"b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e\": rpc error: code = NotFound desc = could not find container \"b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e\": container with ID starting with b42f7c807070293c8b5947ff3cda16fdd377758f7dc436dd4a513040660bac1e not found: ID does not exist" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.377420 4764 scope.go:117] "RemoveContainer" containerID="0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083" Jan 27 07:36:24 crc kubenswrapper[4764]: E0127 07:36:24.378086 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083\": container with ID starting with 0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083 not found: ID does not exist" containerID="0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.378127 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083"} err="failed to get container status \"0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083\": rpc error: code = NotFound desc = could 
not find container \"0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083\": container with ID starting with 0746b5dab4590c74e795084f1ab3ea87d53ebb40d11a16bdd2f2674094f2c083 not found: ID does not exist" Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.385787 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7fbd5ff6f8-6tmz6"] Jan 27 07:36:24 crc kubenswrapper[4764]: I0127 07:36:24.456820 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" path="/var/lib/kubelet/pods/3c441fce-1c1f-4e6c-8fbd-12ef92f35f25/volumes" Jan 27 07:36:25 crc kubenswrapper[4764]: I0127 07:36:25.311762 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"32c7f10b-0a05-432c-9c2e-b53bcc358a0f","Type":"ContainerStarted","Data":"4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5"} Jan 27 07:36:25 crc kubenswrapper[4764]: I0127 07:36:25.311916 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:25 crc kubenswrapper[4764]: I0127 07:36:25.330708 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.330687349 podStartE2EDuration="2.330687349s" podCreationTimestamp="2026-01-27 07:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:25.329002225 +0000 UTC m=+1197.924624751" watchObservedRunningTime="2026-01-27 07:36:25.330687349 +0000 UTC m=+1197.926309885" Jan 27 07:36:33 crc kubenswrapper[4764]: I0127 07:36:33.748434 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.198280 4764 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-pkx7s"] Jan 27 07:36:34 crc kubenswrapper[4764]: E0127 07:36:34.198817 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-log" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.198843 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-log" Jan 27 07:36:34 crc kubenswrapper[4764]: E0127 07:36:34.198861 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-api" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.198871 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-api" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.199100 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-api" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.199141 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c441fce-1c1f-4e6c-8fbd-12ef92f35f25" containerName="placement-log" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.199899 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.214977 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.215213 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.238801 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pkx7s"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.320990 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znhck\" (UniqueName: \"kubernetes.io/projected/e0383119-401b-43eb-8966-2edcc3a90f83-kube-api-access-znhck\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.321069 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-config-data\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.321114 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.321255 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-scripts\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.355269 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.357246 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.365063 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.380845 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.423401 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb77k\" (UniqueName: \"kubernetes.io/projected/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-kube-api-access-bb77k\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.423741 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-config-data\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.423888 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc 
kubenswrapper[4764]: I0127 07:36:34.424073 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-scripts\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.424214 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znhck\" (UniqueName: \"kubernetes.io/projected/e0383119-401b-43eb-8966-2edcc3a90f83-kube-api-access-znhck\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.424381 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-config-data\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.424512 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-logs\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.425332 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.432829 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.445458 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znhck\" (UniqueName: \"kubernetes.io/projected/e0383119-401b-43eb-8966-2edcc3a90f83-kube-api-access-znhck\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.446943 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-config-data\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.476255 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-scripts\") pod \"nova-cell0-cell-mapping-pkx7s\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.486996 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.496299 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.502524 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.528135 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-logs\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.528219 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb77k\" (UniqueName: \"kubernetes.io/projected/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-kube-api-access-bb77k\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.528258 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-config-data\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.528301 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.530607 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-logs\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: 
I0127 07:36:34.532764 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.534361 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.541578 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.571840 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-config-data\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.589770 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb77k\" (UniqueName: \"kubernetes.io/projected/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-kube-api-access-bb77k\") pod \"nova-api-0\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.630460 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-config-data\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.630574 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7n7\" (UniqueName: 
\"kubernetes.io/projected/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-kube-api-access-jm7n7\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.630605 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.643603 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.644925 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.646949 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.657351 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.685606 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.687723 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.691472 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.695291 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.702681 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732445 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmnjl\" (UniqueName: \"kubernetes.io/projected/a4b4f3fa-0837-4012-8b3e-7f299c79386a-kube-api-access-tmnjl\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732481 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732566 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-config-data\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732589 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-config-data\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732620 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732655 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm7n7\" (UniqueName: \"kubernetes.io/projected/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-kube-api-access-jm7n7\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732774 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732835 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b4f3fa-0837-4012-8b3e-7f299c79386a-logs\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732858 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.732944 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgs9\" (UniqueName: \"kubernetes.io/projected/82df1dcc-557b-4fbd-bc4a-7a446e17f729-kube-api-access-prgs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.739828 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.740800 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-config-data\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.784809 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm7n7\" (UniqueName: \"kubernetes.io/projected/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-kube-api-access-jm7n7\") pod \"nova-scheduler-0\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.788832 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-hjxvr"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.791045 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.830090 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-hjxvr"] Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.835887 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.835934 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwg4\" (UniqueName: \"kubernetes.io/projected/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-kube-api-access-sjwg4\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.835960 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836026 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836256 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b4f3fa-0837-4012-8b3e-7f299c79386a-logs\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836322 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836416 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836479 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prgs9\" (UniqueName: \"kubernetes.io/projected/82df1dcc-557b-4fbd-bc4a-7a446e17f729-kube-api-access-prgs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836671 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmnjl\" (UniqueName: \"kubernetes.io/projected/a4b4f3fa-0837-4012-8b3e-7f299c79386a-kube-api-access-tmnjl\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836728 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836794 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836850 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-config\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.836874 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-config-data\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.837264 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b4f3fa-0837-4012-8b3e-7f299c79386a-logs\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.840909 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.841054 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.841685 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.851914 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmnjl\" (UniqueName: \"kubernetes.io/projected/a4b4f3fa-0837-4012-8b3e-7f299c79386a-kube-api-access-tmnjl\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.852090 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prgs9\" (UniqueName: \"kubernetes.io/projected/82df1dcc-557b-4fbd-bc4a-7a446e17f729-kube-api-access-prgs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.854611 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-config-data\") pod \"nova-metadata-0\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " pod="openstack/nova-metadata-0" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.938603 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.938683 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-config\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.938735 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwg4\" (UniqueName: \"kubernetes.io/projected/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-kube-api-access-sjwg4\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.938755 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.938803 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.938836 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.939578 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.939901 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.940135 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.940190 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.940293 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-config\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:34 crc kubenswrapper[4764]: I0127 07:36:34.961830 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwg4\" (UniqueName: \"kubernetes.io/projected/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-kube-api-access-sjwg4\") pod \"dnsmasq-dns-557bbc7df7-hjxvr\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.013996 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.035465 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.109134 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.121258 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.147361 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pkx7s"] Jan 27 07:36:35 crc kubenswrapper[4764]: W0127 07:36:35.162208 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0383119_401b_43eb_8966_2edcc3a90f83.slice/crio-42161eeefe19d837677cc680536d1e3d2994d238feeab556f84b5db0d2db1ce8 WatchSource:0}: Error finding container 42161eeefe19d837677cc680536d1e3d2994d238feeab556f84b5db0d2db1ce8: Status 404 returned error can't find the container with id 42161eeefe19d837677cc680536d1e3d2994d238feeab556f84b5db0d2db1ce8 Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.302222 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.435389 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pkx7s" event={"ID":"e0383119-401b-43eb-8966-2edcc3a90f83","Type":"ContainerStarted","Data":"42161eeefe19d837677cc680536d1e3d2994d238feeab556f84b5db0d2db1ce8"} Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.438377 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36f1c950-b39c-4f2d-9c95-e004d95a3e8a","Type":"ContainerStarted","Data":"f4d8aa4ec6823dd987b762f1aecea87a62e10493b7e3cce7c664ed9e29fcdaae"} Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.521035 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.530481 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsxqz"] Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.531821 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.535114 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.535372 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.550852 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsxqz"] Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.599837 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.630940 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.651567 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-config-data\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.651664 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thgxd\" (UniqueName: \"kubernetes.io/projected/4010c432-6801-418a-9a2f-d5b2f6798ce4-kube-api-access-thgxd\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.651792 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-scripts\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.651834 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.707837 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-hjxvr"] Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.753793 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-config-data\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.753873 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thgxd\" (UniqueName: \"kubernetes.io/projected/4010c432-6801-418a-9a2f-d5b2f6798ce4-kube-api-access-thgxd\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.753953 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-scripts\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 
27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.753981 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.761196 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-scripts\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.762360 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.764097 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-config-data\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc kubenswrapper[4764]: I0127 07:36:35.771219 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thgxd\" (UniqueName: \"kubernetes.io/projected/4010c432-6801-418a-9a2f-d5b2f6798ce4-kube-api-access-thgxd\") pod \"nova-cell1-conductor-db-sync-bsxqz\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:35 crc 
kubenswrapper[4764]: I0127 07:36:35.864741 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.138938 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsxqz"] Jan 27 07:36:36 crc kubenswrapper[4764]: W0127 07:36:36.141747 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4010c432_6801_418a_9a2f_d5b2f6798ce4.slice/crio-810754b8d1c75886584c2777daebb0c26d970ce5c1752f464a1e9f5c4da33df3 WatchSource:0}: Error finding container 810754b8d1c75886584c2777daebb0c26d970ce5c1752f464a1e9f5c4da33df3: Status 404 returned error can't find the container with id 810754b8d1c75886584c2777daebb0c26d970ce5c1752f464a1e9f5c4da33df3 Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.366413 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.480064 4764 generic.go:334] "Generic (PLEG): container finished" podID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerID="311f3db0743162e49c68ba9dbab05c354174f3232a8938f5eed99f817b9a2aa6" exitCode=0 Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.480405 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" event={"ID":"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9","Type":"ContainerDied","Data":"311f3db0743162e49c68ba9dbab05c354174f3232a8938f5eed99f817b9a2aa6"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.480431 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" event={"ID":"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9","Type":"ContainerStarted","Data":"793033cbba137e50095755c57b9e90a4ca20d78aa586663718886e50a4b17892"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.492326 4764 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82df1dcc-557b-4fbd-bc4a-7a446e17f729","Type":"ContainerStarted","Data":"3b77ece36a2464d70de420c157c7f62eff4fbec6656ff51f6b3ed3e870c9371e"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.529642 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cdec377-dbf4-4c83-b613-38ae44a5d6cf","Type":"ContainerStarted","Data":"0a69f0cbe791a3ebfe4884bad8f243b0e3618d058aff67067c5cc49a1f2f4d0c"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.532761 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" event={"ID":"4010c432-6801-418a-9a2f-d5b2f6798ce4","Type":"ContainerStarted","Data":"b45a604b9224386bad3e7ed18cecde1231ed0adbcc6a706f86122141fab50eaf"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.532808 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" event={"ID":"4010c432-6801-418a-9a2f-d5b2f6798ce4","Type":"ContainerStarted","Data":"810754b8d1c75886584c2777daebb0c26d970ce5c1752f464a1e9f5c4da33df3"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.554815 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pkx7s" event={"ID":"e0383119-401b-43eb-8966-2edcc3a90f83","Type":"ContainerStarted","Data":"3138b61e039dd9648b93b62155cd01eb309f57ab907e304f4bae65b3f80cbba5"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.555775 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" podStartSLOduration=1.555760023 podStartE2EDuration="1.555760023s" podCreationTimestamp="2026-01-27 07:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:36.550594048 +0000 UTC m=+1209.146216574" 
watchObservedRunningTime="2026-01-27 07:36:36.555760023 +0000 UTC m=+1209.151382549" Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.590644 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4b4f3fa-0837-4012-8b3e-7f299c79386a","Type":"ContainerStarted","Data":"c3d8730d201f6014f417451b4fffeb66a83e3223bed3e46804c00ce46cd6720b"} Jan 27 07:36:36 crc kubenswrapper[4764]: I0127 07:36:36.638691 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pkx7s" podStartSLOduration=2.638666164 podStartE2EDuration="2.638666164s" podCreationTimestamp="2026-01-27 07:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:36.569571355 +0000 UTC m=+1209.165193881" watchObservedRunningTime="2026-01-27 07:36:36.638666164 +0000 UTC m=+1209.234288690" Jan 27 07:36:37 crc kubenswrapper[4764]: I0127 07:36:37.626570 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" event={"ID":"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9","Type":"ContainerStarted","Data":"ca4f7d7ced1d99db483dd3c1cffd228644ad960bcccadc897ce8930f371695ee"} Jan 27 07:36:37 crc kubenswrapper[4764]: I0127 07:36:37.627140 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:37 crc kubenswrapper[4764]: I0127 07:36:37.669954 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" podStartSLOduration=3.66993785 podStartE2EDuration="3.66993785s" podCreationTimestamp="2026-01-27 07:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:37.664868567 +0000 UTC m=+1210.260491093" watchObservedRunningTime="2026-01-27 07:36:37.66993785 +0000 UTC 
m=+1210.265560376" Jan 27 07:36:38 crc kubenswrapper[4764]: I0127 07:36:38.242754 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:38 crc kubenswrapper[4764]: I0127 07:36:38.254480 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:36:39 crc kubenswrapper[4764]: I0127 07:36:39.677023 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4b4f3fa-0837-4012-8b3e-7f299c79386a","Type":"ContainerStarted","Data":"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128"} Jan 27 07:36:39 crc kubenswrapper[4764]: I0127 07:36:39.681884 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82df1dcc-557b-4fbd-bc4a-7a446e17f729","Type":"ContainerStarted","Data":"f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42"} Jan 27 07:36:39 crc kubenswrapper[4764]: I0127 07:36:39.682091 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="82df1dcc-557b-4fbd-bc4a-7a446e17f729" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42" gracePeriod=30 Jan 27 07:36:39 crc kubenswrapper[4764]: I0127 07:36:39.690562 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cdec377-dbf4-4c83-b613-38ae44a5d6cf","Type":"ContainerStarted","Data":"e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1"} Jan 27 07:36:39 crc kubenswrapper[4764]: I0127 07:36:39.718146 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.007413788 podStartE2EDuration="5.718118415s" podCreationTimestamp="2026-01-27 07:36:34 +0000 UTC" firstStartedPulling="2026-01-27 07:36:35.629326292 +0000 UTC m=+1208.224948818" 
lastFinishedPulling="2026-01-27 07:36:39.340030919 +0000 UTC m=+1211.935653445" observedRunningTime="2026-01-27 07:36:39.701013068 +0000 UTC m=+1212.296635594" watchObservedRunningTime="2026-01-27 07:36:39.718118415 +0000 UTC m=+1212.313740941" Jan 27 07:36:39 crc kubenswrapper[4764]: I0127 07:36:39.722062 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.917847764 podStartE2EDuration="5.722041768s" podCreationTimestamp="2026-01-27 07:36:34 +0000 UTC" firstStartedPulling="2026-01-27 07:36:35.52418203 +0000 UTC m=+1208.119804566" lastFinishedPulling="2026-01-27 07:36:39.328376024 +0000 UTC m=+1211.923998570" observedRunningTime="2026-01-27 07:36:39.717425687 +0000 UTC m=+1212.313048213" watchObservedRunningTime="2026-01-27 07:36:39.722041768 +0000 UTC m=+1212.317664294" Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.015422 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.036455 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.705764 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4b4f3fa-0837-4012-8b3e-7f299c79386a","Type":"ContainerStarted","Data":"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc"} Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.705839 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerName="nova-metadata-log" containerID="cri-o://0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128" gracePeriod=30 Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.705982 4764 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerName="nova-metadata-metadata" containerID="cri-o://41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc" gracePeriod=30 Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.709594 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36f1c950-b39c-4f2d-9c95-e004d95a3e8a","Type":"ContainerStarted","Data":"40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0"} Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.709636 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36f1c950-b39c-4f2d-9c95-e004d95a3e8a","Type":"ContainerStarted","Data":"4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f"} Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.766447 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.281308741 podStartE2EDuration="6.766416728s" podCreationTimestamp="2026-01-27 07:36:34 +0000 UTC" firstStartedPulling="2026-01-27 07:36:35.317194802 +0000 UTC m=+1207.912817328" lastFinishedPulling="2026-01-27 07:36:39.802302789 +0000 UTC m=+1212.397925315" observedRunningTime="2026-01-27 07:36:40.75771429 +0000 UTC m=+1213.353336816" watchObservedRunningTime="2026-01-27 07:36:40.766416728 +0000 UTC m=+1213.362039244" Jan 27 07:36:40 crc kubenswrapper[4764]: I0127 07:36:40.766944 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.023046016 podStartE2EDuration="6.766938641s" podCreationTimestamp="2026-01-27 07:36:34 +0000 UTC" firstStartedPulling="2026-01-27 07:36:35.594967893 +0000 UTC m=+1208.190590419" lastFinishedPulling="2026-01-27 07:36:39.338860518 +0000 UTC m=+1211.934483044" observedRunningTime="2026-01-27 07:36:40.738366523 +0000 UTC m=+1213.333989049" watchObservedRunningTime="2026-01-27 
07:36:40.766938641 +0000 UTC m=+1213.362561167" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.318706 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.484547 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-combined-ca-bundle\") pod \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.484707 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-config-data\") pod \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.484739 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmnjl\" (UniqueName: \"kubernetes.io/projected/a4b4f3fa-0837-4012-8b3e-7f299c79386a-kube-api-access-tmnjl\") pod \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.484819 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b4f3fa-0837-4012-8b3e-7f299c79386a-logs\") pod \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\" (UID: \"a4b4f3fa-0837-4012-8b3e-7f299c79386a\") " Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.486097 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b4f3fa-0837-4012-8b3e-7f299c79386a-logs" (OuterVolumeSpecName: "logs") pod "a4b4f3fa-0837-4012-8b3e-7f299c79386a" (UID: "a4b4f3fa-0837-4012-8b3e-7f299c79386a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.493603 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b4f3fa-0837-4012-8b3e-7f299c79386a-kube-api-access-tmnjl" (OuterVolumeSpecName: "kube-api-access-tmnjl") pod "a4b4f3fa-0837-4012-8b3e-7f299c79386a" (UID: "a4b4f3fa-0837-4012-8b3e-7f299c79386a"). InnerVolumeSpecName "kube-api-access-tmnjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.523789 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4b4f3fa-0837-4012-8b3e-7f299c79386a" (UID: "a4b4f3fa-0837-4012-8b3e-7f299c79386a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.545883 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-config-data" (OuterVolumeSpecName: "config-data") pod "a4b4f3fa-0837-4012-8b3e-7f299c79386a" (UID: "a4b4f3fa-0837-4012-8b3e-7f299c79386a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.587040 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.587279 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmnjl\" (UniqueName: \"kubernetes.io/projected/a4b4f3fa-0837-4012-8b3e-7f299c79386a-kube-api-access-tmnjl\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.587623 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b4f3fa-0837-4012-8b3e-7f299c79386a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.587643 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b4f3fa-0837-4012-8b3e-7f299c79386a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.720328 4764 generic.go:334] "Generic (PLEG): container finished" podID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerID="41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc" exitCode=0 Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.721273 4764 generic.go:334] "Generic (PLEG): container finished" podID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerID="0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128" exitCode=143 Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.720482 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4b4f3fa-0837-4012-8b3e-7f299c79386a","Type":"ContainerDied","Data":"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc"} Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.720430 4764 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.721379 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4b4f3fa-0837-4012-8b3e-7f299c79386a","Type":"ContainerDied","Data":"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128"} Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.721395 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4b4f3fa-0837-4012-8b3e-7f299c79386a","Type":"ContainerDied","Data":"c3d8730d201f6014f417451b4fffeb66a83e3223bed3e46804c00ce46cd6720b"} Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.721413 4764 scope.go:117] "RemoveContainer" containerID="41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.759918 4764 scope.go:117] "RemoveContainer" containerID="0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.760431 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.772687 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.791159 4764 scope.go:117] "RemoveContainer" containerID="41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc" Jan 27 07:36:41 crc kubenswrapper[4764]: E0127 07:36:41.791758 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc\": container with ID starting with 41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc not found: ID does not exist" 
containerID="41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.791798 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc"} err="failed to get container status \"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc\": rpc error: code = NotFound desc = could not find container \"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc\": container with ID starting with 41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc not found: ID does not exist" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.791823 4764 scope.go:117] "RemoveContainer" containerID="0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128" Jan 27 07:36:41 crc kubenswrapper[4764]: E0127 07:36:41.792131 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128\": container with ID starting with 0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128 not found: ID does not exist" containerID="0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.792187 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128"} err="failed to get container status \"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128\": rpc error: code = NotFound desc = could not find container \"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128\": container with ID starting with 0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128 not found: ID does not exist" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.792219 4764 scope.go:117] 
"RemoveContainer" containerID="41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.792540 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc"} err="failed to get container status \"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc\": rpc error: code = NotFound desc = could not find container \"41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc\": container with ID starting with 41f8a6758b990d98352af0d832c841be2fef1813a9c6eaeb663f575df7fbd5cc not found: ID does not exist" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.792567 4764 scope.go:117] "RemoveContainer" containerID="0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.792815 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128"} err="failed to get container status \"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128\": rpc error: code = NotFound desc = could not find container \"0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128\": container with ID starting with 0930b5e7eebe8c01ad7e44c3b108a4c6d98b8e79d15347bcd8f953d7dd61a128 not found: ID does not exist" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.809020 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:41 crc kubenswrapper[4764]: E0127 07:36:41.809506 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerName="nova-metadata-log" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.809526 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" 
containerName="nova-metadata-log" Jan 27 07:36:41 crc kubenswrapper[4764]: E0127 07:36:41.809574 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerName="nova-metadata-metadata" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.809583 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerName="nova-metadata-metadata" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.809805 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerName="nova-metadata-log" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.809840 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" containerName="nova-metadata-metadata" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.811033 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.815867 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.816074 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.827982 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.894947 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-config-data\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.895080 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f5731e-f6ec-4df7-89da-1803f833fbdf-logs\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.895255 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnrpk\" (UniqueName: \"kubernetes.io/projected/e5f5731e-f6ec-4df7-89da-1803f833fbdf-kube-api-access-bnrpk\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.895282 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.895390 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.998126 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnrpk\" (UniqueName: \"kubernetes.io/projected/e5f5731e-f6ec-4df7-89da-1803f833fbdf-kube-api-access-bnrpk\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.998186 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.998234 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.998311 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-config-data\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.998392 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f5731e-f6ec-4df7-89da-1803f833fbdf-logs\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:41 crc kubenswrapper[4764]: I0127 07:36:41.998963 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f5731e-f6ec-4df7-89da-1803f833fbdf-logs\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.005977 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-config-data\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:42 crc kubenswrapper[4764]: 
I0127 07:36:42.006677 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.008908 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.022750 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnrpk\" (UniqueName: \"kubernetes.io/projected/e5f5731e-f6ec-4df7-89da-1803f833fbdf-kube-api-access-bnrpk\") pod \"nova-metadata-0\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " pod="openstack/nova-metadata-0" Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.143868 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.457404 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b4f3fa-0837-4012-8b3e-7f299c79386a" path="/var/lib/kubelet/pods/a4b4f3fa-0837-4012-8b3e-7f299c79386a/volumes" Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.483691 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.483897 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="76322b36-0480-4d14-8148-67e63de915fe" containerName="kube-state-metrics" containerID="cri-o://afff865eab5405c7296c151528437da377d38b37d49bb95cdacf969830d3883c" gracePeriod=30 Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.640631 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:42 crc kubenswrapper[4764]: W0127 07:36:42.665256 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f5731e_f6ec_4df7_89da_1803f833fbdf.slice/crio-323096e539ace5bbf19638bd197a2f2e054e533a20a84ea8f9896d63ffbe5d5a WatchSource:0}: Error finding container 323096e539ace5bbf19638bd197a2f2e054e533a20a84ea8f9896d63ffbe5d5a: Status 404 returned error can't find the container with id 323096e539ace5bbf19638bd197a2f2e054e533a20a84ea8f9896d63ffbe5d5a Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.734575 4764 generic.go:334] "Generic (PLEG): container finished" podID="76322b36-0480-4d14-8148-67e63de915fe" containerID="afff865eab5405c7296c151528437da377d38b37d49bb95cdacf969830d3883c" exitCode=2 Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.734650 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"76322b36-0480-4d14-8148-67e63de915fe","Type":"ContainerDied","Data":"afff865eab5405c7296c151528437da377d38b37d49bb95cdacf969830d3883c"} Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.737127 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f5731e-f6ec-4df7-89da-1803f833fbdf","Type":"ContainerStarted","Data":"323096e539ace5bbf19638bd197a2f2e054e533a20a84ea8f9896d63ffbe5d5a"} Jan 27 07:36:42 crc kubenswrapper[4764]: I0127 07:36:42.958779 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.125063 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw7lm\" (UniqueName: \"kubernetes.io/projected/76322b36-0480-4d14-8148-67e63de915fe-kube-api-access-zw7lm\") pod \"76322b36-0480-4d14-8148-67e63de915fe\" (UID: \"76322b36-0480-4d14-8148-67e63de915fe\") " Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.130392 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76322b36-0480-4d14-8148-67e63de915fe-kube-api-access-zw7lm" (OuterVolumeSpecName: "kube-api-access-zw7lm") pod "76322b36-0480-4d14-8148-67e63de915fe" (UID: "76322b36-0480-4d14-8148-67e63de915fe"). InnerVolumeSpecName "kube-api-access-zw7lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.227494 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw7lm\" (UniqueName: \"kubernetes.io/projected/76322b36-0480-4d14-8148-67e63de915fe-kube-api-access-zw7lm\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.756658 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f5731e-f6ec-4df7-89da-1803f833fbdf","Type":"ContainerStarted","Data":"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981"} Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.756713 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f5731e-f6ec-4df7-89da-1803f833fbdf","Type":"ContainerStarted","Data":"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649"} Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.762855 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"76322b36-0480-4d14-8148-67e63de915fe","Type":"ContainerDied","Data":"625d4ce34f009b633058ab873d7b5ab5ea269d71db0272049a0c717e2ba292cb"} Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.762919 4764 scope.go:117] "RemoveContainer" containerID="afff865eab5405c7296c151528437da377d38b37d49bb95cdacf969830d3883c" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.763082 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.767270 4764 generic.go:334] "Generic (PLEG): container finished" podID="e0383119-401b-43eb-8966-2edcc3a90f83" containerID="3138b61e039dd9648b93b62155cd01eb309f57ab907e304f4bae65b3f80cbba5" exitCode=0 Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.767314 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pkx7s" event={"ID":"e0383119-401b-43eb-8966-2edcc3a90f83","Type":"ContainerDied","Data":"3138b61e039dd9648b93b62155cd01eb309f57ab907e304f4bae65b3f80cbba5"} Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.829102 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.82908154 podStartE2EDuration="2.82908154s" podCreationTimestamp="2026-01-27 07:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:43.80310434 +0000 UTC m=+1216.398726866" watchObservedRunningTime="2026-01-27 07:36:43.82908154 +0000 UTC m=+1216.424704066" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.864571 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.874367 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.883258 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:36:43 crc kubenswrapper[4764]: E0127 07:36:43.883957 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76322b36-0480-4d14-8148-67e63de915fe" containerName="kube-state-metrics" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.884085 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="76322b36-0480-4d14-8148-67e63de915fe" containerName="kube-state-metrics" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.884383 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="76322b36-0480-4d14-8148-67e63de915fe" containerName="kube-state-metrics" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.885144 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.888474 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.888514 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 07:36:43 crc kubenswrapper[4764]: I0127 07:36:43.909747 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.048148 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.048189 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.048266 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.048292 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ltj\" (UniqueName: \"kubernetes.io/projected/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-api-access-t7ltj\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.149877 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.149963 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7ltj\" (UniqueName: \"kubernetes.io/projected/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-api-access-t7ltj\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.150109 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.150137 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.156871 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.159872 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.159906 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.169913 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7ltj\" (UniqueName: \"kubernetes.io/projected/8e1fb0ac-7115-4548-b084-a0b800ad68a8-kube-api-access-t7ltj\") pod \"kube-state-metrics-0\" (UID: \"8e1fb0ac-7115-4548-b084-a0b800ad68a8\") " pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.210836 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.360840 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.361153 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-central-agent" containerID="cri-o://b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423" gracePeriod=30 Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.362347 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="proxy-httpd" containerID="cri-o://2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087" gracePeriod=30 Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.362470 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="sg-core" containerID="cri-o://7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74" gracePeriod=30 Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.362566 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-notification-agent" containerID="cri-o://a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b" gracePeriod=30 Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.450020 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76322b36-0480-4d14-8148-67e63de915fe" path="/var/lib/kubelet/pods/76322b36-0480-4d14-8148-67e63de915fe/volumes" Jan 27 07:36:44 crc kubenswrapper[4764]: E0127 07:36:44.635839 4764 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d103ed0_456f_4890_8370_d0185b7af9b3.slice/crio-conmon-2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087.scope\": RecentStats: unable to find data in memory cache]" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.697260 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.697320 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.713600 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.777355 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8e1fb0ac-7115-4548-b084-a0b800ad68a8","Type":"ContainerStarted","Data":"c5d6ecf7a46574cd4ea6255940285911e6fa9753699bca7ae6ddf9004aa638a1"} Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.780512 4764 generic.go:334] "Generic (PLEG): container finished" podID="4010c432-6801-418a-9a2f-d5b2f6798ce4" containerID="b45a604b9224386bad3e7ed18cecde1231ed0adbcc6a706f86122141fab50eaf" exitCode=0 Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.780639 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" event={"ID":"4010c432-6801-418a-9a2f-d5b2f6798ce4","Type":"ContainerDied","Data":"b45a604b9224386bad3e7ed18cecde1231ed0adbcc6a706f86122141fab50eaf"} Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.789042 4764 generic.go:334] "Generic (PLEG): container finished" podID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerID="2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087" exitCode=0 Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.789069 4764 
generic.go:334] "Generic (PLEG): container finished" podID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerID="7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74" exitCode=2 Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.789104 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerDied","Data":"2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087"} Jan 27 07:36:44 crc kubenswrapper[4764]: I0127 07:36:44.789130 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerDied","Data":"7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74"} Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.015262 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.055941 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.122935 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.180831 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.244470 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-nqjwf"] Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.244715 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" podUID="1201c328-fd0d-47ca-a25e-756f49187e19" containerName="dnsmasq-dns" containerID="cri-o://4ae43e1bd6886cc6de4b6070681b1f52c0ccbbaa7779cca6a72840027ce9dbfa" gracePeriod=10 Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.278506 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-combined-ca-bundle\") pod \"e0383119-401b-43eb-8966-2edcc3a90f83\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.278913 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-config-data\") pod \"e0383119-401b-43eb-8966-2edcc3a90f83\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.306836 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-config-data" (OuterVolumeSpecName: "config-data") pod "e0383119-401b-43eb-8966-2edcc3a90f83" (UID: "e0383119-401b-43eb-8966-2edcc3a90f83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.306928 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0383119-401b-43eb-8966-2edcc3a90f83" (UID: "e0383119-401b-43eb-8966-2edcc3a90f83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.382009 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-scripts\") pod \"e0383119-401b-43eb-8966-2edcc3a90f83\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.382093 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znhck\" (UniqueName: \"kubernetes.io/projected/e0383119-401b-43eb-8966-2edcc3a90f83-kube-api-access-znhck\") pod \"e0383119-401b-43eb-8966-2edcc3a90f83\" (UID: \"e0383119-401b-43eb-8966-2edcc3a90f83\") " Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.382574 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.382597 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.391539 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-scripts" (OuterVolumeSpecName: "scripts") pod 
"e0383119-401b-43eb-8966-2edcc3a90f83" (UID: "e0383119-401b-43eb-8966-2edcc3a90f83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.393799 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0383119-401b-43eb-8966-2edcc3a90f83-kube-api-access-znhck" (OuterVolumeSpecName: "kube-api-access-znhck") pod "e0383119-401b-43eb-8966-2edcc3a90f83" (UID: "e0383119-401b-43eb-8966-2edcc3a90f83"). InnerVolumeSpecName "kube-api-access-znhck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.484914 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0383119-401b-43eb-8966-2edcc3a90f83-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.484943 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znhck\" (UniqueName: \"kubernetes.io/projected/e0383119-401b-43eb-8966-2edcc3a90f83-kube-api-access-znhck\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.828980 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.829363 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.856244 4764 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8e1fb0ac-7115-4548-b084-a0b800ad68a8","Type":"ContainerStarted","Data":"7400e4d9a617e6f8a4cbf388ac6df737247a0c27f54f3663ab8a04cbeb7222db"} Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.856309 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.877885 4764 generic.go:334] "Generic (PLEG): container finished" podID="1201c328-fd0d-47ca-a25e-756f49187e19" containerID="4ae43e1bd6886cc6de4b6070681b1f52c0ccbbaa7779cca6a72840027ce9dbfa" exitCode=0 Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.878897 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" event={"ID":"1201c328-fd0d-47ca-a25e-756f49187e19","Type":"ContainerDied","Data":"4ae43e1bd6886cc6de4b6070681b1f52c0ccbbaa7779cca6a72840027ce9dbfa"} Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.878932 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" event={"ID":"1201c328-fd0d-47ca-a25e-756f49187e19","Type":"ContainerDied","Data":"7ddb3bcf1c9e3937c96e910e459d372de22cd8631230e580bcbe9c830682f021"} Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.878944 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ddb3bcf1c9e3937c96e910e459d372de22cd8631230e580bcbe9c830682f021" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.896393 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.464189983 podStartE2EDuration="2.896371547s" podCreationTimestamp="2026-01-27 07:36:43 +0000 UTC" firstStartedPulling="2026-01-27 07:36:44.716795978 +0000 UTC m=+1217.312418504" lastFinishedPulling="2026-01-27 07:36:45.148977532 +0000 UTC m=+1217.744600068" observedRunningTime="2026-01-27 07:36:45.875033829 +0000 UTC m=+1218.470656365" 
watchObservedRunningTime="2026-01-27 07:36:45.896371547 +0000 UTC m=+1218.491994073" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.898813 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.915791 4764 generic.go:334] "Generic (PLEG): container finished" podID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerID="b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423" exitCode=0 Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.915880 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerDied","Data":"b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423"} Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.917533 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pkx7s" event={"ID":"e0383119-401b-43eb-8966-2edcc3a90f83","Type":"ContainerDied","Data":"42161eeefe19d837677cc680536d1e3d2994d238feeab556f84b5db0d2db1ce8"} Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.917580 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42161eeefe19d837677cc680536d1e3d2994d238feeab556f84b5db0d2db1ce8" Jan 27 07:36:45 crc kubenswrapper[4764]: I0127 07:36:45.917656 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pkx7s" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.029518 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.030702 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.030901 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-log" containerID="cri-o://4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f" gracePeriod=30 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.031034 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-api" containerID="cri-o://40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0" gracePeriod=30 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.033975 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-nb\") pod \"1201c328-fd0d-47ca-a25e-756f49187e19\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.034100 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-svc\") pod \"1201c328-fd0d-47ca-a25e-756f49187e19\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.034196 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-config\") pod \"1201c328-fd0d-47ca-a25e-756f49187e19\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.034228 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtkf\" (UniqueName: \"kubernetes.io/projected/1201c328-fd0d-47ca-a25e-756f49187e19-kube-api-access-ldtkf\") pod \"1201c328-fd0d-47ca-a25e-756f49187e19\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.034352 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-swift-storage-0\") pod \"1201c328-fd0d-47ca-a25e-756f49187e19\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.034397 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-sb\") pod \"1201c328-fd0d-47ca-a25e-756f49187e19\" (UID: \"1201c328-fd0d-47ca-a25e-756f49187e19\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.045904 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.051314 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1201c328-fd0d-47ca-a25e-756f49187e19-kube-api-access-ldtkf" (OuterVolumeSpecName: "kube-api-access-ldtkf") pod "1201c328-fd0d-47ca-a25e-756f49187e19" (UID: "1201c328-fd0d-47ca-a25e-756f49187e19"). InnerVolumeSpecName "kube-api-access-ldtkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.066581 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.066808 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-log" containerID="cri-o://8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649" gracePeriod=30 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.067564 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-metadata" containerID="cri-o://e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981" gracePeriod=30 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.120430 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1201c328-fd0d-47ca-a25e-756f49187e19" (UID: "1201c328-fd0d-47ca-a25e-756f49187e19"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.132026 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-config" (OuterVolumeSpecName: "config") pod "1201c328-fd0d-47ca-a25e-756f49187e19" (UID: "1201c328-fd0d-47ca-a25e-756f49187e19"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.136906 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.136941 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.136953 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtkf\" (UniqueName: \"kubernetes.io/projected/1201c328-fd0d-47ca-a25e-756f49187e19-kube-api-access-ldtkf\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.150659 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1201c328-fd0d-47ca-a25e-756f49187e19" (UID: "1201c328-fd0d-47ca-a25e-756f49187e19"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.151322 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1201c328-fd0d-47ca-a25e-756f49187e19" (UID: "1201c328-fd0d-47ca-a25e-756f49187e19"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.185364 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1201c328-fd0d-47ca-a25e-756f49187e19" (UID: "1201c328-fd0d-47ca-a25e-756f49187e19"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.239004 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.239028 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.239047 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1201c328-fd0d-47ca-a25e-756f49187e19-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.317034 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.443211 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-scripts\") pod \"4010c432-6801-418a-9a2f-d5b2f6798ce4\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.443370 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-combined-ca-bundle\") pod \"4010c432-6801-418a-9a2f-d5b2f6798ce4\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.443418 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-config-data\") pod \"4010c432-6801-418a-9a2f-d5b2f6798ce4\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.443537 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thgxd\" (UniqueName: \"kubernetes.io/projected/4010c432-6801-418a-9a2f-d5b2f6798ce4-kube-api-access-thgxd\") pod \"4010c432-6801-418a-9a2f-d5b2f6798ce4\" (UID: \"4010c432-6801-418a-9a2f-d5b2f6798ce4\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.449585 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-scripts" (OuterVolumeSpecName: "scripts") pod "4010c432-6801-418a-9a2f-d5b2f6798ce4" (UID: "4010c432-6801-418a-9a2f-d5b2f6798ce4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.454690 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4010c432-6801-418a-9a2f-d5b2f6798ce4-kube-api-access-thgxd" (OuterVolumeSpecName: "kube-api-access-thgxd") pod "4010c432-6801-418a-9a2f-d5b2f6798ce4" (UID: "4010c432-6801-418a-9a2f-d5b2f6798ce4"). InnerVolumeSpecName "kube-api-access-thgxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.473424 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-config-data" (OuterVolumeSpecName: "config-data") pod "4010c432-6801-418a-9a2f-d5b2f6798ce4" (UID: "4010c432-6801-418a-9a2f-d5b2f6798ce4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.486470 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4010c432-6801-418a-9a2f-d5b2f6798ce4" (UID: "4010c432-6801-418a-9a2f-d5b2f6798ce4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.548577 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.548618 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.548632 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4010c432-6801-418a-9a2f-d5b2f6798ce4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.548644 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thgxd\" (UniqueName: \"kubernetes.io/projected/4010c432-6801-418a-9a2f-d5b2f6798ce4-kube-api-access-thgxd\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.822644 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.839583 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921081 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921466 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="proxy-httpd" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921495 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="proxy-httpd" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921519 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-metadata" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921526 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-metadata" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921536 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1201c328-fd0d-47ca-a25e-756f49187e19" containerName="init" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921542 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="1201c328-fd0d-47ca-a25e-756f49187e19" containerName="init" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921556 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0383119-401b-43eb-8966-2edcc3a90f83" containerName="nova-manage" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921562 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0383119-401b-43eb-8966-2edcc3a90f83" containerName="nova-manage" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921578 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-log" Jan 27 07:36:46 crc 
kubenswrapper[4764]: I0127 07:36:46.921583 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-log" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921596 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="sg-core" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921602 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="sg-core" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921614 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1201c328-fd0d-47ca-a25e-756f49187e19" containerName="dnsmasq-dns" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921620 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="1201c328-fd0d-47ca-a25e-756f49187e19" containerName="dnsmasq-dns" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921628 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-central-agent" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921633 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-central-agent" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921639 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-notification-agent" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921644 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-notification-agent" Jan 27 07:36:46 crc kubenswrapper[4764]: E0127 07:36:46.921652 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4010c432-6801-418a-9a2f-d5b2f6798ce4" 
containerName="nova-cell1-conductor-db-sync" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921657 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4010c432-6801-418a-9a2f-d5b2f6798ce4" containerName="nova-cell1-conductor-db-sync" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921812 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="sg-core" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921826 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="1201c328-fd0d-47ca-a25e-756f49187e19" containerName="dnsmasq-dns" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921839 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-log" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921846 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerName="nova-metadata-metadata" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921857 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="proxy-httpd" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921866 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-notification-agent" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921873 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerName="ceilometer-central-agent" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921880 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0383119-401b-43eb-8966-2edcc3a90f83" containerName="nova-manage" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.921890 4764 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4010c432-6801-418a-9a2f-d5b2f6798ce4" containerName="nova-cell1-conductor-db-sync" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.922460 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.944188 4764 generic.go:334] "Generic (PLEG): container finished" podID="0d103ed0-456f-4890-8370-d0185b7af9b3" containerID="a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b" exitCode=0 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.944266 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerDied","Data":"a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b"} Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.944294 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d103ed0-456f-4890-8370-d0185b7af9b3","Type":"ContainerDied","Data":"f3ab7b9dd39164cad3b0a9443df1e9c3217712d9170dc57c9b42b3fb28ee4ce0"} Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.944314 4764 scope.go:117] "RemoveContainer" containerID="2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.944395 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.948987 4764 generic.go:334] "Generic (PLEG): container finished" podID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerID="4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f" exitCode=143 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.949048 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36f1c950-b39c-4f2d-9c95-e004d95a3e8a","Type":"ContainerDied","Data":"4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f"} Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.955355 4764 generic.go:334] "Generic (PLEG): container finished" podID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerID="e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981" exitCode=0 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.955398 4764 generic.go:334] "Generic (PLEG): container finished" podID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" containerID="8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649" exitCode=143 Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.955461 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f5731e-f6ec-4df7-89da-1803f833fbdf","Type":"ContainerDied","Data":"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981"} Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.955492 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f5731e-f6ec-4df7-89da-1803f833fbdf","Type":"ContainerDied","Data":"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649"} Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.955506 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e5f5731e-f6ec-4df7-89da-1803f833fbdf","Type":"ContainerDied","Data":"323096e539ace5bbf19638bd197a2f2e054e533a20a84ea8f9896d63ffbe5d5a"} Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.955577 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956248 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnrpk\" (UniqueName: \"kubernetes.io/projected/e5f5731e-f6ec-4df7-89da-1803f833fbdf-kube-api-access-bnrpk\") pod \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956316 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-config-data\") pod \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956354 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdvj4\" (UniqueName: \"kubernetes.io/projected/0d103ed0-456f-4890-8370-d0185b7af9b3-kube-api-access-vdvj4\") pod \"0d103ed0-456f-4890-8370-d0185b7af9b3\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956422 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-nova-metadata-tls-certs\") pod \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956549 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-log-httpd\") pod \"0d103ed0-456f-4890-8370-d0185b7af9b3\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956613 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-combined-ca-bundle\") pod \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956654 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-run-httpd\") pod \"0d103ed0-456f-4890-8370-d0185b7af9b3\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956675 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-sg-core-conf-yaml\") pod \"0d103ed0-456f-4890-8370-d0185b7af9b3\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956738 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-combined-ca-bundle\") pod \"0d103ed0-456f-4890-8370-d0185b7af9b3\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956773 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-config-data\") pod \"0d103ed0-456f-4890-8370-d0185b7af9b3\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956806 
4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f5731e-f6ec-4df7-89da-1803f833fbdf-logs\") pod \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\" (UID: \"e5f5731e-f6ec-4df7-89da-1803f833fbdf\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.956870 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-scripts\") pod \"0d103ed0-456f-4890-8370-d0185b7af9b3\" (UID: \"0d103ed0-456f-4890-8370-d0185b7af9b3\") " Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.960803 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0d103ed0-456f-4890-8370-d0185b7af9b3" (UID: "0d103ed0-456f-4890-8370-d0185b7af9b3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.960904 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.961758 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0d103ed0-456f-4890-8370-d0185b7af9b3" (UID: "0d103ed0-456f-4890-8370-d0185b7af9b3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.962777 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.970138 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f5731e-f6ec-4df7-89da-1803f833fbdf-logs" (OuterVolumeSpecName: "logs") pod "e5f5731e-f6ec-4df7-89da-1803f833fbdf" (UID: "e5f5731e-f6ec-4df7-89da-1803f833fbdf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.970897 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bsxqz" event={"ID":"4010c432-6801-418a-9a2f-d5b2f6798ce4","Type":"ContainerDied","Data":"810754b8d1c75886584c2777daebb0c26d970ce5c1752f464a1e9f5c4da33df3"} Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.970933 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="810754b8d1c75886584c2777daebb0c26d970ce5c1752f464a1e9f5c4da33df3" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.970993 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-nqjwf" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.973419 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-scripts" (OuterVolumeSpecName: "scripts") pod "0d103ed0-456f-4890-8370-d0185b7af9b3" (UID: "0d103ed0-456f-4890-8370-d0185b7af9b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.974429 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f5731e-f6ec-4df7-89da-1803f833fbdf-kube-api-access-bnrpk" (OuterVolumeSpecName: "kube-api-access-bnrpk") pod "e5f5731e-f6ec-4df7-89da-1803f833fbdf" (UID: "e5f5731e-f6ec-4df7-89da-1803f833fbdf"). 
InnerVolumeSpecName "kube-api-access-bnrpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.975105 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d103ed0-456f-4890-8370-d0185b7af9b3-kube-api-access-vdvj4" (OuterVolumeSpecName: "kube-api-access-vdvj4") pod "0d103ed0-456f-4890-8370-d0185b7af9b3" (UID: "0d103ed0-456f-4890-8370-d0185b7af9b3"). InnerVolumeSpecName "kube-api-access-vdvj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:46 crc kubenswrapper[4764]: I0127 07:36:46.986662 4764 scope.go:117] "RemoveContainer" containerID="7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.010928 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5f5731e-f6ec-4df7-89da-1803f833fbdf" (UID: "e5f5731e-f6ec-4df7-89da-1803f833fbdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.014650 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0d103ed0-456f-4890-8370-d0185b7af9b3" (UID: "0d103ed0-456f-4890-8370-d0185b7af9b3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.038547 4764 scope.go:117] "RemoveContainer" containerID="a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.047296 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-config-data" (OuterVolumeSpecName: "config-data") pod "e5f5731e-f6ec-4df7-89da-1803f833fbdf" (UID: "e5f5731e-f6ec-4df7-89da-1803f833fbdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.049930 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-nqjwf"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.057222 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-nqjwf"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059247 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059315 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjz9\" (UniqueName: \"kubernetes.io/projected/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-kube-api-access-vpjz9\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059399 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059571 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f5731e-f6ec-4df7-89da-1803f833fbdf-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059587 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059596 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnrpk\" (UniqueName: \"kubernetes.io/projected/e5f5731e-f6ec-4df7-89da-1803f833fbdf-kube-api-access-bnrpk\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059607 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059616 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdvj4\" (UniqueName: \"kubernetes.io/projected/0d103ed0-456f-4890-8370-d0185b7af9b3-kube-api-access-vdvj4\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059624 4764 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059739 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059751 4764 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.059761 4764 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d103ed0-456f-4890-8370-d0185b7af9b3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.062298 4764 scope.go:117] "RemoveContainer" containerID="b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.082172 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e5f5731e-f6ec-4df7-89da-1803f833fbdf" (UID: "e5f5731e-f6ec-4df7-89da-1803f833fbdf"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.082236 4764 scope.go:117] "RemoveContainer" containerID="2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087" Jan 27 07:36:47 crc kubenswrapper[4764]: E0127 07:36:47.083096 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087\": container with ID starting with 2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087 not found: ID does not exist" containerID="2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.083131 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087"} err="failed to get container status \"2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087\": rpc error: code = NotFound desc = could not find container \"2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087\": container with ID starting with 2f1c70a963d1263d1b8bfed95c11cd06fb947b82fdeded017beadcd022eaf087 not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.083158 4764 scope.go:117] "RemoveContainer" containerID="7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74" Jan 27 07:36:47 crc kubenswrapper[4764]: E0127 07:36:47.083619 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74\": container with ID starting with 7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74 not found: ID does not exist" containerID="7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.083658 
4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74"} err="failed to get container status \"7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74\": rpc error: code = NotFound desc = could not find container \"7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74\": container with ID starting with 7cd79e5b73b8ca79c7e6cdc755b969660dc6be5bc3c120e9cc0c7d9a10c69d74 not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.083694 4764 scope.go:117] "RemoveContainer" containerID="a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b" Jan 27 07:36:47 crc kubenswrapper[4764]: E0127 07:36:47.084292 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b\": container with ID starting with a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b not found: ID does not exist" containerID="a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.084314 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b"} err="failed to get container status \"a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b\": rpc error: code = NotFound desc = could not find container \"a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b\": container with ID starting with a3f383f129e99f7c4e31c7832e190d81504978cd541f67f18d3ed1bd1eaa739b not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.084327 4764 scope.go:117] "RemoveContainer" containerID="b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423" Jan 27 07:36:47 crc kubenswrapper[4764]: E0127 
07:36:47.084933 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423\": container with ID starting with b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423 not found: ID does not exist" containerID="b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.085386 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423"} err="failed to get container status \"b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423\": rpc error: code = NotFound desc = could not find container \"b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423\": container with ID starting with b7a15a03ce40e5b79d70d788bb9707288b249899ba459d709ee57824cdaef423 not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.085426 4764 scope.go:117] "RemoveContainer" containerID="e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.088084 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d103ed0-456f-4890-8370-d0185b7af9b3" (UID: "0d103ed0-456f-4890-8370-d0185b7af9b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.103555 4764 scope.go:117] "RemoveContainer" containerID="8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.124709 4764 scope.go:117] "RemoveContainer" containerID="e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981" Jan 27 07:36:47 crc kubenswrapper[4764]: E0127 07:36:47.125325 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981\": container with ID starting with e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981 not found: ID does not exist" containerID="e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.125376 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981"} err="failed to get container status \"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981\": rpc error: code = NotFound desc = could not find container \"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981\": container with ID starting with e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981 not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.125397 4764 scope.go:117] "RemoveContainer" containerID="8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649" Jan 27 07:36:47 crc kubenswrapper[4764]: E0127 07:36:47.126005 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649\": container with ID starting with 
8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649 not found: ID does not exist" containerID="8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.126042 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649"} err="failed to get container status \"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649\": rpc error: code = NotFound desc = could not find container \"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649\": container with ID starting with 8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649 not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.126071 4764 scope.go:117] "RemoveContainer" containerID="e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.126405 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981"} err="failed to get container status \"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981\": rpc error: code = NotFound desc = could not find container \"e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981\": container with ID starting with e6d659c5ec3d67c0499668ca5a088ec8a6f4f0372cd5a9e50517fcd99be22981 not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.126428 4764 scope.go:117] "RemoveContainer" containerID="8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.126710 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649"} err="failed to get container status 
\"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649\": rpc error: code = NotFound desc = could not find container \"8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649\": container with ID starting with 8a2e694c94ef1aa0a287036aafd2a0cc0d1630fd78d3ffe894f9eaf651e06649 not found: ID does not exist" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.130388 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-config-data" (OuterVolumeSpecName: "config-data") pod "0d103ed0-456f-4890-8370-d0185b7af9b3" (UID: "0d103ed0-456f-4890-8370-d0185b7af9b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.161091 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.161176 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjz9\" (UniqueName: \"kubernetes.io/projected/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-kube-api-access-vpjz9\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.161253 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.161404 4764 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.161423 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d103ed0-456f-4890-8370-d0185b7af9b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.161432 4764 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f5731e-f6ec-4df7-89da-1803f833fbdf-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.164481 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.172587 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.186603 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjz9\" (UniqueName: \"kubernetes.io/projected/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-kube-api-access-vpjz9\") pod \"nova-cell1-conductor-0\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.241243 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.304060 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.326761 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.339502 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.347493 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.349240 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.352878 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.353082 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.411251 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.444828 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.458785 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.460861 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.463171 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.463455 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.463744 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.480516 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.480615 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-config-data\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.480649 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.480691 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-565zz\" (UniqueName: 
\"kubernetes.io/projected/73d9f608-dc01-4940-9ab2-381619a27f31-kube-api-access-565zz\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.480731 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d9f608-dc01-4940-9ab2-381619a27f31-logs\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.482908 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586494 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586552 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586613 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586644 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-log-httpd\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586719 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hfvn\" (UniqueName: \"kubernetes.io/projected/9f8a6ca8-7408-49af-b264-0929a3714a76-kube-api-access-9hfvn\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586755 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-config-data\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586775 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-run-httpd\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586796 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586828 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-565zz\" (UniqueName: \"kubernetes.io/projected/73d9f608-dc01-4940-9ab2-381619a27f31-kube-api-access-565zz\") pod \"nova-metadata-0\" (UID: 
\"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586864 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586895 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d9f608-dc01-4940-9ab2-381619a27f31-logs\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586940 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-config-data\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.586966 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-scripts\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.592235 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.593205 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.594937 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d9f608-dc01-4940-9ab2-381619a27f31-logs\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.595774 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-config-data\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.622107 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-565zz\" (UniqueName: \"kubernetes.io/projected/73d9f608-dc01-4940-9ab2-381619a27f31-kube-api-access-565zz\") pod \"nova-metadata-0\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.688797 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.688836 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.688985 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-log-httpd\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.689051 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hfvn\" (UniqueName: \"kubernetes.io/projected/9f8a6ca8-7408-49af-b264-0929a3714a76-kube-api-access-9hfvn\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.689079 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-run-httpd\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.689120 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.689167 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-config-data\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.689194 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-scripts\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.689639 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-log-httpd\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.689732 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-run-httpd\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.690460 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.695224 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.695351 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.695367 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.695925 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-config-data\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.698697 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-scripts\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.714561 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hfvn\" (UniqueName: \"kubernetes.io/projected/9f8a6ca8-7408-49af-b264-0929a3714a76-kube-api-access-9hfvn\") pod \"ceilometer-0\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.776532 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.814332 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.974699 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c0fe3ecb-bc7e-44ac-bd01-bd775782552f","Type":"ContainerStarted","Data":"2279f745a918667876b905c658b2c4e9ea5fa68a494c48782eb71c820555f87e"} Jan 27 07:36:47 crc kubenswrapper[4764]: I0127 07:36:47.979083 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4cdec377-dbf4-4c83-b613-38ae44a5d6cf" containerName="nova-scheduler-scheduler" containerID="cri-o://e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1" gracePeriod=30 Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.136765 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.280674 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:36:48 crc kubenswrapper[4764]: W0127 07:36:48.289986 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8a6ca8_7408_49af_b264_0929a3714a76.slice/crio-db9b145bd386bd047d26fb91ff4f8840320db2136ce7678abc02d621ca21af0e WatchSource:0}: Error finding container db9b145bd386bd047d26fb91ff4f8840320db2136ce7678abc02d621ca21af0e: Status 404 returned error can't find the container with id db9b145bd386bd047d26fb91ff4f8840320db2136ce7678abc02d621ca21af0e Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.454869 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d103ed0-456f-4890-8370-d0185b7af9b3" path="/var/lib/kubelet/pods/0d103ed0-456f-4890-8370-d0185b7af9b3/volumes" Jan 27 07:36:48 crc 
kubenswrapper[4764]: I0127 07:36:48.455564 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1201c328-fd0d-47ca-a25e-756f49187e19" path="/var/lib/kubelet/pods/1201c328-fd0d-47ca-a25e-756f49187e19/volumes" Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.456127 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f5731e-f6ec-4df7-89da-1803f833fbdf" path="/var/lib/kubelet/pods/e5f5731e-f6ec-4df7-89da-1803f833fbdf/volumes" Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.988535 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c0fe3ecb-bc7e-44ac-bd01-bd775782552f","Type":"ContainerStarted","Data":"74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39"} Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.988666 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.990711 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73d9f608-dc01-4940-9ab2-381619a27f31","Type":"ContainerStarted","Data":"ae0cd3780fa508a09297b790570ff6b1afe30590d7d8c6cc49de9eb6bdca3b1b"} Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.990762 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73d9f608-dc01-4940-9ab2-381619a27f31","Type":"ContainerStarted","Data":"ba6d07c84d3077635957d851d2c0fe630d0c06d6753ef533e9ec42bda009db9f"} Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.990780 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73d9f608-dc01-4940-9ab2-381619a27f31","Type":"ContainerStarted","Data":"ef3b51069dcc78f4321fc9ae47543257a55db867b36864a2cc918703c224b7e3"} Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.992410 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerStarted","Data":"b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f"} Jan 27 07:36:48 crc kubenswrapper[4764]: I0127 07:36:48.992450 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerStarted","Data":"db9b145bd386bd047d26fb91ff4f8840320db2136ce7678abc02d621ca21af0e"} Jan 27 07:36:49 crc kubenswrapper[4764]: I0127 07:36:49.010422 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.010405675 podStartE2EDuration="3.010405675s" podCreationTimestamp="2026-01-27 07:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:49.004959312 +0000 UTC m=+1221.600581848" watchObservedRunningTime="2026-01-27 07:36:49.010405675 +0000 UTC m=+1221.606028201" Jan 27 07:36:49 crc kubenswrapper[4764]: I0127 07:36:49.044343 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.044320612 podStartE2EDuration="2.044320612s" podCreationTimestamp="2026-01-27 07:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:49.022913202 +0000 UTC m=+1221.618535728" watchObservedRunningTime="2026-01-27 07:36:49.044320612 +0000 UTC m=+1221.639943138" Jan 27 07:36:49 crc kubenswrapper[4764]: I0127 07:36:49.764558 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:36:50 crc kubenswrapper[4764]: I0127 07:36:50.006919 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerStarted","Data":"f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c"} Jan 27 07:36:50 crc kubenswrapper[4764]: E0127 07:36:50.019199 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 07:36:50 crc kubenswrapper[4764]: E0127 07:36:50.021118 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 07:36:50 crc kubenswrapper[4764]: E0127 07:36:50.022617 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 07:36:50 crc kubenswrapper[4764]: E0127 07:36:50.022688 4764 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4cdec377-dbf4-4c83-b613-38ae44a5d6cf" containerName="nova-scheduler-scheduler" Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.023956 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerStarted","Data":"873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97"} Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.890561 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.974210 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-combined-ca-bundle\") pod \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.974301 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb77k\" (UniqueName: \"kubernetes.io/projected/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-kube-api-access-bb77k\") pod \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.974368 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-logs\") pod \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.974400 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-config-data\") pod \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\" (UID: \"36f1c950-b39c-4f2d-9c95-e004d95a3e8a\") " Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.977454 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-logs" (OuterVolumeSpecName: "logs") pod 
"36f1c950-b39c-4f2d-9c95-e004d95a3e8a" (UID: "36f1c950-b39c-4f2d-9c95-e004d95a3e8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:36:51 crc kubenswrapper[4764]: I0127 07:36:51.986666 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-kube-api-access-bb77k" (OuterVolumeSpecName: "kube-api-access-bb77k") pod "36f1c950-b39c-4f2d-9c95-e004d95a3e8a" (UID: "36f1c950-b39c-4f2d-9c95-e004d95a3e8a"). InnerVolumeSpecName "kube-api-access-bb77k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.002672 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-config-data" (OuterVolumeSpecName: "config-data") pod "36f1c950-b39c-4f2d-9c95-e004d95a3e8a" (UID: "36f1c950-b39c-4f2d-9c95-e004d95a3e8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.003133 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36f1c950-b39c-4f2d-9c95-e004d95a3e8a" (UID: "36f1c950-b39c-4f2d-9c95-e004d95a3e8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.035385 4764 generic.go:334] "Generic (PLEG): container finished" podID="4cdec377-dbf4-4c83-b613-38ae44a5d6cf" containerID="e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1" exitCode=0 Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.036507 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cdec377-dbf4-4c83-b613-38ae44a5d6cf","Type":"ContainerDied","Data":"e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1"} Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.039731 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerStarted","Data":"09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7"} Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.041458 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.043329 4764 generic.go:334] "Generic (PLEG): container finished" podID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerID="40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0" exitCode=0 Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.043369 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36f1c950-b39c-4f2d-9c95-e004d95a3e8a","Type":"ContainerDied","Data":"40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0"} Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.043579 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36f1c950-b39c-4f2d-9c95-e004d95a3e8a","Type":"ContainerDied","Data":"f4d8aa4ec6823dd987b762f1aecea87a62e10493b7e3cce7c664ed9e29fcdaae"} Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.043649 4764 scope.go:117] 
"RemoveContainer" containerID="40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.043401 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.048694 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.067589 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.614655504 podStartE2EDuration="5.067571053s" podCreationTimestamp="2026-01-27 07:36:47 +0000 UTC" firstStartedPulling="2026-01-27 07:36:48.294594466 +0000 UTC m=+1220.890216992" lastFinishedPulling="2026-01-27 07:36:51.747510015 +0000 UTC m=+1224.343132541" observedRunningTime="2026-01-27 07:36:52.067005099 +0000 UTC m=+1224.662627615" watchObservedRunningTime="2026-01-27 07:36:52.067571053 +0000 UTC m=+1224.663193579" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.069175 4764 scope.go:117] "RemoveContainer" containerID="4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.075433 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-combined-ca-bundle\") pod \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.075747 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-config-data\") pod \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 
07:36:52.075814 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm7n7\" (UniqueName: \"kubernetes.io/projected/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-kube-api-access-jm7n7\") pod \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\" (UID: \"4cdec377-dbf4-4c83-b613-38ae44a5d6cf\") " Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.084610 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.084638 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.084659 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.084670 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb77k\" (UniqueName: \"kubernetes.io/projected/36f1c950-b39c-4f2d-9c95-e004d95a3e8a-kube-api-access-bb77k\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.090106 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-kube-api-access-jm7n7" (OuterVolumeSpecName: "kube-api-access-jm7n7") pod "4cdec377-dbf4-4c83-b613-38ae44a5d6cf" (UID: "4cdec377-dbf4-4c83-b613-38ae44a5d6cf"). InnerVolumeSpecName "kube-api-access-jm7n7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.104895 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-config-data" (OuterVolumeSpecName: "config-data") pod "4cdec377-dbf4-4c83-b613-38ae44a5d6cf" (UID: "4cdec377-dbf4-4c83-b613-38ae44a5d6cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.117642 4764 scope.go:117] "RemoveContainer" containerID="40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0" Jan 27 07:36:52 crc kubenswrapper[4764]: E0127 07:36:52.118210 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0\": container with ID starting with 40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0 not found: ID does not exist" containerID="40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.118247 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0"} err="failed to get container status \"40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0\": rpc error: code = NotFound desc = could not find container \"40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0\": container with ID starting with 40a7f65bd508ecd5bdcd664708408ca61e281911b4cf541554464b13da4ce1e0 not found: ID does not exist" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.118269 4764 scope.go:117] "RemoveContainer" containerID="4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f" Jan 27 07:36:52 crc kubenswrapper[4764]: E0127 07:36:52.118614 4764 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f\": container with ID starting with 4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f not found: ID does not exist" containerID="4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.118638 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f"} err="failed to get container status \"4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f\": rpc error: code = NotFound desc = could not find container \"4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f\": container with ID starting with 4f0d1b0bacde8b867f748e3212eeda7ac8e4a624b2a51c3c5d4f2fdb59f87a3f not found: ID does not exist" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.122510 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.132160 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.140958 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cdec377-dbf4-4c83-b613-38ae44a5d6cf" (UID: "4cdec377-dbf4-4c83-b613-38ae44a5d6cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.143684 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:52 crc kubenswrapper[4764]: E0127 07:36:52.144177 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cdec377-dbf4-4c83-b613-38ae44a5d6cf" containerName="nova-scheduler-scheduler" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.144202 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cdec377-dbf4-4c83-b613-38ae44a5d6cf" containerName="nova-scheduler-scheduler" Jan 27 07:36:52 crc kubenswrapper[4764]: E0127 07:36:52.144241 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-log" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.144251 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-log" Jan 27 07:36:52 crc kubenswrapper[4764]: E0127 07:36:52.144274 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-api" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.144282 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-api" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.144536 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-api" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.144561 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" containerName="nova-api-log" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.144580 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cdec377-dbf4-4c83-b613-38ae44a5d6cf" 
containerName="nova-scheduler-scheduler" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.145789 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.152871 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.153754 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.185648 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2q45\" (UniqueName: \"kubernetes.io/projected/fada4ea4-da37-4cef-9429-024c9752d397-kube-api-access-q2q45\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.185971 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada4ea4-da37-4cef-9429-024c9752d397-logs\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.186145 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-config-data\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.186259 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 
27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.186414 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.186526 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.186605 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm7n7\" (UniqueName: \"kubernetes.io/projected/4cdec377-dbf4-4c83-b613-38ae44a5d6cf-kube-api-access-jm7n7\") on node \"crc\" DevicePath \"\"" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.270068 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.288712 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada4ea4-da37-4cef-9429-024c9752d397-logs\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.289262 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada4ea4-da37-4cef-9429-024c9752d397-logs\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.289287 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-config-data\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 
07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.289362 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.289600 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2q45\" (UniqueName: \"kubernetes.io/projected/fada4ea4-da37-4cef-9429-024c9752d397-kube-api-access-q2q45\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.292614 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-config-data\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.292916 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.311514 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2q45\" (UniqueName: \"kubernetes.io/projected/fada4ea4-da37-4cef-9429-024c9752d397-kube-api-access-q2q45\") pod \"nova-api-0\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.449756 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f1c950-b39c-4f2d-9c95-e004d95a3e8a" 
path="/var/lib/kubelet/pods/36f1c950-b39c-4f2d-9c95-e004d95a3e8a/volumes" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.471008 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.692361 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.692403 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 07:36:52 crc kubenswrapper[4764]: W0127 07:36:52.906453 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfada4ea4_da37_4cef_9429_024c9752d397.slice/crio-62a4041d6a641606739a109314a1a49f2a52432dfc191c2af3e28f48b56158ed WatchSource:0}: Error finding container 62a4041d6a641606739a109314a1a49f2a52432dfc191c2af3e28f48b56158ed: Status 404 returned error can't find the container with id 62a4041d6a641606739a109314a1a49f2a52432dfc191c2af3e28f48b56158ed Jan 27 07:36:52 crc kubenswrapper[4764]: I0127 07:36:52.916764 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.056812 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cdec377-dbf4-4c83-b613-38ae44a5d6cf","Type":"ContainerDied","Data":"0a69f0cbe791a3ebfe4884bad8f243b0e3618d058aff67067c5cc49a1f2f4d0c"} Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.056870 4764 scope.go:117] "RemoveContainer" containerID="e13b3c6f19e8b9fe351e471c8722ab96d60f75c463ba4b2549db030ef2ef08a1" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.056881 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.060084 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fada4ea4-da37-4cef-9429-024c9752d397","Type":"ContainerStarted","Data":"62a4041d6a641606739a109314a1a49f2a52432dfc191c2af3e28f48b56158ed"} Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.110205 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.156879 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.170732 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.172202 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.174521 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.198337 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.208626 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrwc\" (UniqueName: \"kubernetes.io/projected/e53a8a08-3378-4a4c-b20d-d90a64662740-kube-api-access-qmrwc\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.208705 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-config-data\") pod \"nova-scheduler-0\" (UID: 
\"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.208767 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.310123 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-config-data\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.310576 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.310768 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmrwc\" (UniqueName: \"kubernetes.io/projected/e53a8a08-3378-4a4c-b20d-d90a64662740-kube-api-access-qmrwc\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.314458 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-config-data\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.314659 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.327281 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmrwc\" (UniqueName: \"kubernetes.io/projected/e53a8a08-3378-4a4c-b20d-d90a64662740-kube-api-access-qmrwc\") pod \"nova-scheduler-0\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.550779 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.766647 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:36:53 crc kubenswrapper[4764]: I0127 07:36:53.766975 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:36:54 crc kubenswrapper[4764]: I0127 07:36:54.027881 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:36:54 crc kubenswrapper[4764]: W0127 07:36:54.028896 4764 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode53a8a08_3378_4a4c_b20d_d90a64662740.slice/crio-28dcec14af9ef4906e5183d7d145c89a3c347db0c7c13a070c480b7d907a1b69 WatchSource:0}: Error finding container 28dcec14af9ef4906e5183d7d145c89a3c347db0c7c13a070c480b7d907a1b69: Status 404 returned error can't find the container with id 28dcec14af9ef4906e5183d7d145c89a3c347db0c7c13a070c480b7d907a1b69 Jan 27 07:36:54 crc kubenswrapper[4764]: I0127 07:36:54.076048 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e53a8a08-3378-4a4c-b20d-d90a64662740","Type":"ContainerStarted","Data":"28dcec14af9ef4906e5183d7d145c89a3c347db0c7c13a070c480b7d907a1b69"} Jan 27 07:36:54 crc kubenswrapper[4764]: I0127 07:36:54.081802 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fada4ea4-da37-4cef-9429-024c9752d397","Type":"ContainerStarted","Data":"fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f"} Jan 27 07:36:54 crc kubenswrapper[4764]: I0127 07:36:54.081931 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fada4ea4-da37-4cef-9429-024c9752d397","Type":"ContainerStarted","Data":"a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c"} Jan 27 07:36:54 crc kubenswrapper[4764]: I0127 07:36:54.112204 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.112187976 podStartE2EDuration="2.112187976s" podCreationTimestamp="2026-01-27 07:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:54.10392816 +0000 UTC m=+1226.699550706" watchObservedRunningTime="2026-01-27 07:36:54.112187976 +0000 UTC m=+1226.707810502" Jan 27 07:36:54 crc kubenswrapper[4764]: I0127 07:36:54.224534 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Jan 27 07:36:54 crc kubenswrapper[4764]: I0127 07:36:54.451058 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cdec377-dbf4-4c83-b613-38ae44a5d6cf" path="/var/lib/kubelet/pods/4cdec377-dbf4-4c83-b613-38ae44a5d6cf/volumes" Jan 27 07:36:55 crc kubenswrapper[4764]: I0127 07:36:55.092410 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e53a8a08-3378-4a4c-b20d-d90a64662740","Type":"ContainerStarted","Data":"ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57"} Jan 27 07:36:55 crc kubenswrapper[4764]: I0127 07:36:55.111815 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.111791954 podStartE2EDuration="2.111791954s" podCreationTimestamp="2026-01-27 07:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:36:55.106848334 +0000 UTC m=+1227.702470870" watchObservedRunningTime="2026-01-27 07:36:55.111791954 +0000 UTC m=+1227.707414490" Jan 27 07:36:57 crc kubenswrapper[4764]: I0127 07:36:57.690760 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 07:36:57 crc kubenswrapper[4764]: I0127 07:36:57.691945 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 07:36:58 crc kubenswrapper[4764]: I0127 07:36:58.551178 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 07:36:58 crc kubenswrapper[4764]: I0127 07:36:58.702701 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Jan 27 07:36:58 crc kubenswrapper[4764]: I0127 07:36:58.702780 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:02 crc kubenswrapper[4764]: I0127 07:37:02.472006 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:37:02 crc kubenswrapper[4764]: I0127 07:37:02.472577 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:37:03 crc kubenswrapper[4764]: I0127 07:37:03.551198 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 07:37:03 crc kubenswrapper[4764]: I0127 07:37:03.554687 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:03 crc kubenswrapper[4764]: I0127 07:37:03.554727 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:03 crc kubenswrapper[4764]: I0127 07:37:03.598800 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 07:37:04 crc kubenswrapper[4764]: I0127 07:37:04.243311 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 07:37:07 crc 
kubenswrapper[4764]: I0127 07:37:07.699505 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 07:37:07 crc kubenswrapper[4764]: I0127 07:37:07.700173 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 07:37:07 crc kubenswrapper[4764]: I0127 07:37:07.706390 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 07:37:08 crc kubenswrapper[4764]: I0127 07:37:08.241523 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.104796 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.150822 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prgs9\" (UniqueName: \"kubernetes.io/projected/82df1dcc-557b-4fbd-bc4a-7a446e17f729-kube-api-access-prgs9\") pod \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.151112 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-config-data\") pod \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.151179 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-combined-ca-bundle\") pod \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\" (UID: \"82df1dcc-557b-4fbd-bc4a-7a446e17f729\") " Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.157780 4764 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82df1dcc-557b-4fbd-bc4a-7a446e17f729-kube-api-access-prgs9" (OuterVolumeSpecName: "kube-api-access-prgs9") pod "82df1dcc-557b-4fbd-bc4a-7a446e17f729" (UID: "82df1dcc-557b-4fbd-bc4a-7a446e17f729"). InnerVolumeSpecName "kube-api-access-prgs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.178599 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-config-data" (OuterVolumeSpecName: "config-data") pod "82df1dcc-557b-4fbd-bc4a-7a446e17f729" (UID: "82df1dcc-557b-4fbd-bc4a-7a446e17f729"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.191980 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82df1dcc-557b-4fbd-bc4a-7a446e17f729" (UID: "82df1dcc-557b-4fbd-bc4a-7a446e17f729"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.253274 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prgs9\" (UniqueName: \"kubernetes.io/projected/82df1dcc-557b-4fbd-bc4a-7a446e17f729-kube-api-access-prgs9\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.253320 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.253338 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82df1dcc-557b-4fbd-bc4a-7a446e17f729-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.255933 4764 generic.go:334] "Generic (PLEG): container finished" podID="82df1dcc-557b-4fbd-bc4a-7a446e17f729" containerID="f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42" exitCode=137 Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.255996 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82df1dcc-557b-4fbd-bc4a-7a446e17f729","Type":"ContainerDied","Data":"f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42"} Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.256025 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.256052 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82df1dcc-557b-4fbd-bc4a-7a446e17f729","Type":"ContainerDied","Data":"3b77ece36a2464d70de420c157c7f62eff4fbec6656ff51f6b3ed3e870c9371e"} Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.256077 4764 scope.go:117] "RemoveContainer" containerID="f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.300705 4764 scope.go:117] "RemoveContainer" containerID="f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42" Jan 27 07:37:10 crc kubenswrapper[4764]: E0127 07:37:10.302013 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42\": container with ID starting with f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42 not found: ID does not exist" containerID="f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.302063 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42"} err="failed to get container status \"f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42\": rpc error: code = NotFound desc = could not find container \"f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42\": container with ID starting with f95dfcbec9cc46f9ef99766a292651d9de3486396da154757ab10719a8b89b42 not found: ID does not exist" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.303923 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 
07:37:10.322703 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.333239 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:37:10 crc kubenswrapper[4764]: E0127 07:37:10.333936 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df1dcc-557b-4fbd-bc4a-7a446e17f729" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.333963 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df1dcc-557b-4fbd-bc4a-7a446e17f729" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.334426 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="82df1dcc-557b-4fbd-bc4a-7a446e17f729" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.340373 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.345025 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.345594 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.346525 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.355093 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.355155 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.355204 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92kn\" (UniqueName: \"kubernetes.io/projected/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-kube-api-access-n92kn\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.355267 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.355295 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.362363 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.456892 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.457000 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.457079 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92kn\" (UniqueName: \"kubernetes.io/projected/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-kube-api-access-n92kn\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc 
kubenswrapper[4764]: I0127 07:37:10.457236 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.457290 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.462071 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82df1dcc-557b-4fbd-bc4a-7a446e17f729" path="/var/lib/kubelet/pods/82df1dcc-557b-4fbd-bc4a-7a446e17f729/volumes" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.463814 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.464527 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.464798 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.467691 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.492401 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92kn\" (UniqueName: \"kubernetes.io/projected/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-kube-api-access-n92kn\") pod \"nova-cell1-novncproxy-0\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:10 crc kubenswrapper[4764]: I0127 07:37:10.661138 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:11 crc kubenswrapper[4764]: W0127 07:37:11.197586 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85733ad9_3eda_48d5_abf5_2ffe5e2202e3.slice/crio-8cb6c65d01706c744c8c9a25d81cd827c4b3b349eee3de7feeeda97aa09a99a9 WatchSource:0}: Error finding container 8cb6c65d01706c744c8c9a25d81cd827c4b3b349eee3de7feeeda97aa09a99a9: Status 404 returned error can't find the container with id 8cb6c65d01706c744c8c9a25d81cd827c4b3b349eee3de7feeeda97aa09a99a9 Jan 27 07:37:11 crc kubenswrapper[4764]: I0127 07:37:11.199508 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:37:11 crc kubenswrapper[4764]: I0127 07:37:11.271268 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85733ad9-3eda-48d5-abf5-2ffe5e2202e3","Type":"ContainerStarted","Data":"8cb6c65d01706c744c8c9a25d81cd827c4b3b349eee3de7feeeda97aa09a99a9"} Jan 27 07:37:12 crc kubenswrapper[4764]: I0127 07:37:12.286611 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85733ad9-3eda-48d5-abf5-2ffe5e2202e3","Type":"ContainerStarted","Data":"fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5"} Jan 27 07:37:12 crc kubenswrapper[4764]: I0127 07:37:12.341141 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.341118034 podStartE2EDuration="2.341118034s" podCreationTimestamp="2026-01-27 07:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:37:12.324013266 +0000 UTC m=+1244.919635832" watchObservedRunningTime="2026-01-27 07:37:12.341118034 +0000 UTC m=+1244.936740570" Jan 27 07:37:12 crc kubenswrapper[4764]: I0127 
07:37:12.476615 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 07:37:12 crc kubenswrapper[4764]: I0127 07:37:12.477736 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 07:37:12 crc kubenswrapper[4764]: I0127 07:37:12.478291 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 07:37:12 crc kubenswrapper[4764]: I0127 07:37:12.480170 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.298900 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.304659 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.601991 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-7p47r"] Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.603603 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.631763 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-7p47r"] Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.755870 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.755929 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.756008 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.756038 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-config\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.756112 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64lwp\" (UniqueName: \"kubernetes.io/projected/308a7a6e-bf8e-489d-bc3b-e858341c39b8-kube-api-access-64lwp\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.756145 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-svc\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.858116 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64lwp\" (UniqueName: \"kubernetes.io/projected/308a7a6e-bf8e-489d-bc3b-e858341c39b8-kube-api-access-64lwp\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.858385 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-svc\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.858540 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.858641 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.858724 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.858816 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-config\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.859357 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-svc\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.859667 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-config\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.861011 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.861143 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.861267 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.875622 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64lwp\" (UniqueName: \"kubernetes.io/projected/308a7a6e-bf8e-489d-bc3b-e858341c39b8-kube-api-access-64lwp\") pod \"dnsmasq-dns-5ddd577785-7p47r\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:13 crc kubenswrapper[4764]: I0127 07:37:13.962774 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:14 crc kubenswrapper[4764]: I0127 07:37:14.435355 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-7p47r"] Jan 27 07:37:14 crc kubenswrapper[4764]: W0127 07:37:14.436760 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod308a7a6e_bf8e_489d_bc3b_e858341c39b8.slice/crio-d43c3bb5c1613a3983c374318ff9d9cde83ce92e857cf935e1924fd0dd240bc4 WatchSource:0}: Error finding container d43c3bb5c1613a3983c374318ff9d9cde83ce92e857cf935e1924fd0dd240bc4: Status 404 returned error can't find the container with id d43c3bb5c1613a3983c374318ff9d9cde83ce92e857cf935e1924fd0dd240bc4 Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.327473 4764 generic.go:334] "Generic (PLEG): container finished" podID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerID="d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c" exitCode=0 Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.327561 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" event={"ID":"308a7a6e-bf8e-489d-bc3b-e858341c39b8","Type":"ContainerDied","Data":"d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c"} Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.328382 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" event={"ID":"308a7a6e-bf8e-489d-bc3b-e858341c39b8","Type":"ContainerStarted","Data":"d43c3bb5c1613a3983c374318ff9d9cde83ce92e857cf935e1924fd0dd240bc4"} Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.636726 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.637488 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-central-agent" containerID="cri-o://b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f" gracePeriod=30 Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.637732 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-notification-agent" containerID="cri-o://f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c" gracePeriod=30 Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.637904 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="proxy-httpd" containerID="cri-o://09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7" gracePeriod=30 Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.637949 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="sg-core" containerID="cri-o://873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97" gracePeriod=30 Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.655517 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.210:3000/\": EOF" Jan 27 07:37:15 crc kubenswrapper[4764]: I0127 07:37:15.661623 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.187996 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.338738 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5ddd577785-7p47r" event={"ID":"308a7a6e-bf8e-489d-bc3b-e858341c39b8","Type":"ContainerStarted","Data":"fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957"} Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.339068 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341355 4764 generic.go:334] "Generic (PLEG): container finished" podID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerID="09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7" exitCode=0 Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341380 4764 generic.go:334] "Generic (PLEG): container finished" podID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerID="873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97" exitCode=2 Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341392 4764 generic.go:334] "Generic (PLEG): container finished" podID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerID="b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f" exitCode=0 Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341399 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerDied","Data":"09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7"} Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341423 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerDied","Data":"873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97"} Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341446 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerDied","Data":"b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f"} Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341597 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-log" containerID="cri-o://a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c" gracePeriod=30 Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.341635 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-api" containerID="cri-o://fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f" gracePeriod=30 Jan 27 07:37:16 crc kubenswrapper[4764]: I0127 07:37:16.361405 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" podStartSLOduration=3.361385875 podStartE2EDuration="3.361385875s" podCreationTimestamp="2026-01-27 07:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:37:16.357594656 +0000 UTC m=+1248.953217182" watchObservedRunningTime="2026-01-27 07:37:16.361385875 +0000 UTC m=+1248.957008401" Jan 27 07:37:17 crc kubenswrapper[4764]: I0127 07:37:17.351228 4764 generic.go:334] "Generic (PLEG): container finished" podID="fada4ea4-da37-4cef-9429-024c9752d397" containerID="a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c" exitCode=143 Jan 27 07:37:17 crc kubenswrapper[4764]: I0127 07:37:17.352112 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fada4ea4-da37-4cef-9429-024c9752d397","Type":"ContainerDied","Data":"a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c"} Jan 27 07:37:17 crc kubenswrapper[4764]: 
I0127 07:37:17.777599 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.210:3000/\": dial tcp 10.217.0.210:3000: connect: connection refused" Jan 27 07:37:19 crc kubenswrapper[4764]: I0127 07:37:19.967313 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:19 crc kubenswrapper[4764]: I0127 07:37:19.993173 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada4ea4-da37-4cef-9429-024c9752d397-logs\") pod \"fada4ea4-da37-4cef-9429-024c9752d397\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " Jan 27 07:37:19 crc kubenswrapper[4764]: I0127 07:37:19.993346 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-config-data\") pod \"fada4ea4-da37-4cef-9429-024c9752d397\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " Jan 27 07:37:19 crc kubenswrapper[4764]: I0127 07:37:19.993420 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2q45\" (UniqueName: \"kubernetes.io/projected/fada4ea4-da37-4cef-9429-024c9752d397-kube-api-access-q2q45\") pod \"fada4ea4-da37-4cef-9429-024c9752d397\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " Jan 27 07:37:19 crc kubenswrapper[4764]: I0127 07:37:19.993495 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-combined-ca-bundle\") pod \"fada4ea4-da37-4cef-9429-024c9752d397\" (UID: \"fada4ea4-da37-4cef-9429-024c9752d397\") " Jan 27 07:37:19 crc kubenswrapper[4764]: I0127 07:37:19.993946 4764 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fada4ea4-da37-4cef-9429-024c9752d397-logs" (OuterVolumeSpecName: "logs") pod "fada4ea4-da37-4cef-9429-024c9752d397" (UID: "fada4ea4-da37-4cef-9429-024c9752d397"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.005676 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fada4ea4-da37-4cef-9429-024c9752d397-kube-api-access-q2q45" (OuterVolumeSpecName: "kube-api-access-q2q45") pod "fada4ea4-da37-4cef-9429-024c9752d397" (UID: "fada4ea4-da37-4cef-9429-024c9752d397"). InnerVolumeSpecName "kube-api-access-q2q45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.055251 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fada4ea4-da37-4cef-9429-024c9752d397" (UID: "fada4ea4-da37-4cef-9429-024c9752d397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.079621 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-config-data" (OuterVolumeSpecName: "config-data") pod "fada4ea4-da37-4cef-9429-024c9752d397" (UID: "fada4ea4-da37-4cef-9429-024c9752d397"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.096214 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.096271 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2q45\" (UniqueName: \"kubernetes.io/projected/fada4ea4-da37-4cef-9429-024c9752d397-kube-api-access-q2q45\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.096295 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fada4ea4-da37-4cef-9429-024c9752d397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.096312 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada4ea4-da37-4cef-9429-024c9752d397-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.377923 4764 generic.go:334] "Generic (PLEG): container finished" podID="fada4ea4-da37-4cef-9429-024c9752d397" containerID="fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f" exitCode=0 Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.377969 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fada4ea4-da37-4cef-9429-024c9752d397","Type":"ContainerDied","Data":"fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f"} Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.377993 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fada4ea4-da37-4cef-9429-024c9752d397","Type":"ContainerDied","Data":"62a4041d6a641606739a109314a1a49f2a52432dfc191c2af3e28f48b56158ed"} Jan 27 07:37:20 crc kubenswrapper[4764]: 
I0127 07:37:20.378008 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.378011 4764 scope.go:117] "RemoveContainer" containerID="fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.416015 4764 scope.go:117] "RemoveContainer" containerID="a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.420782 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.431110 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.456976 4764 scope.go:117] "RemoveContainer" containerID="fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f" Jan 27 07:37:20 crc kubenswrapper[4764]: E0127 07:37:20.457810 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f\": container with ID starting with fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f not found: ID does not exist" containerID="fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.457865 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f"} err="failed to get container status \"fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f\": rpc error: code = NotFound desc = could not find container \"fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f\": container with ID starting with fbbc2a0599495201da2b41e4fb9e6ebb39483ed3adc7ad1f004c4bae6ee8588f not found: 
ID does not exist" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.457930 4764 scope.go:117] "RemoveContainer" containerID="a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c" Jan 27 07:37:20 crc kubenswrapper[4764]: E0127 07:37:20.463674 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c\": container with ID starting with a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c not found: ID does not exist" containerID="a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.463725 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c"} err="failed to get container status \"a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c\": rpc error: code = NotFound desc = could not find container \"a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c\": container with ID starting with a0e87c1e015255d5abb5c4a8cedab8dfaa0d3a9c10d95de6259a949562ef032c not found: ID does not exist" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.465023 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fada4ea4-da37-4cef-9429-024c9752d397" path="/var/lib/kubelet/pods/fada4ea4-da37-4cef-9429-024c9752d397/volumes" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.465774 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:20 crc kubenswrapper[4764]: E0127 07:37:20.466169 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-log" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.466189 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-log" Jan 27 07:37:20 crc kubenswrapper[4764]: E0127 07:37:20.466205 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-api" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.466212 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-api" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.466495 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-api" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.466530 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="fada4ea4-da37-4cef-9429-024c9752d397" containerName="nova-api-log" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.480951 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.481069 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.487613 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.487897 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.487944 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.617267 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.617326 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq6q7\" (UniqueName: \"kubernetes.io/projected/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-kube-api-access-xq6q7\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.617368 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-config-data\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.617444 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.617495 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-logs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.617511 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-public-tls-certs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.663663 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.680205 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.719390 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-config-data\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.719708 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.719738 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-logs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.719753 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-public-tls-certs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.719847 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.719890 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq6q7\" (UniqueName: \"kubernetes.io/projected/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-kube-api-access-xq6q7\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.720523 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-logs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.723350 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 
07:37:20.723514 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.723705 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-public-tls-certs\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.725006 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-config-data\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.743162 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq6q7\" (UniqueName: \"kubernetes.io/projected/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-kube-api-access-xq6q7\") pod \"nova-api-0\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.811628 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:20 crc kubenswrapper[4764]: I0127 07:37:20.948884 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.124658 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-log-httpd\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.125464 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.125742 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-ceilometer-tls-certs\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.126316 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-combined-ca-bundle\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.126356 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hfvn\" (UniqueName: \"kubernetes.io/projected/9f8a6ca8-7408-49af-b264-0929a3714a76-kube-api-access-9hfvn\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.126381 4764 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-run-httpd\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.126400 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-scripts\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.126487 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-sg-core-conf-yaml\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.126530 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-config-data\") pod \"9f8a6ca8-7408-49af-b264-0929a3714a76\" (UID: \"9f8a6ca8-7408-49af-b264-0929a3714a76\") " Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.127107 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.127548 4764 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.127574 4764 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f8a6ca8-7408-49af-b264-0929a3714a76-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.134163 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-scripts" (OuterVolumeSpecName: "scripts") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.134326 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8a6ca8-7408-49af-b264-0929a3714a76-kube-api-access-9hfvn" (OuterVolumeSpecName: "kube-api-access-9hfvn") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "kube-api-access-9hfvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.159172 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.190680 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.205830 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.223409 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-config-data" (OuterVolumeSpecName: "config-data") pod "9f8a6ca8-7408-49af-b264-0929a3714a76" (UID: "9f8a6ca8-7408-49af-b264-0929a3714a76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.229765 4764 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.229788 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.229798 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hfvn\" (UniqueName: \"kubernetes.io/projected/9f8a6ca8-7408-49af-b264-0929a3714a76-kube-api-access-9hfvn\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.229808 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.229817 4764 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.229825 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8a6ca8-7408-49af-b264-0929a3714a76-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.274837 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.389099 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d","Type":"ContainerStarted","Data":"a2136b46e4dd4b5762a5ec032ec1fb7f7dd16918989efe3bca293dbe2526799c"} Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.393062 4764 generic.go:334] "Generic (PLEG): container finished" podID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerID="f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c" exitCode=0 Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.393196 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerDied","Data":"f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c"} Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.393276 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f8a6ca8-7408-49af-b264-0929a3714a76","Type":"ContainerDied","Data":"db9b145bd386bd047d26fb91ff4f8840320db2136ce7678abc02d621ca21af0e"} Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.393361 4764 scope.go:117] "RemoveContainer" containerID="09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.393716 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.419222 4764 scope.go:117] "RemoveContainer" containerID="873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.420900 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.436480 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.447331 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.449938 4764 scope.go:117] "RemoveContainer" containerID="f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.505720 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.506407 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-central-agent" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.506418 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-central-agent" Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.506433 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-notification-agent" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.515118 4764 scope.go:117] "RemoveContainer" containerID="b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.506443 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-notification-agent" Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.520737 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="proxy-httpd" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.520751 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="proxy-httpd" Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.520780 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="sg-core" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.520790 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="sg-core" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.521080 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="proxy-httpd" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.521094 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="sg-core" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.521103 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-central-agent" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.521116 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" containerName="ceilometer-notification-agent" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.522587 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.522678 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.525691 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.525935 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.526053 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534104 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce77f72a-8ed8-4216-b443-a1c5737a50e7-log-httpd\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534139 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-config-data\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534167 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-scripts\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534193 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " 
pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534231 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce77f72a-8ed8-4216-b443-a1c5737a50e7-run-httpd\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534287 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534334 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vppbn\" (UniqueName: \"kubernetes.io/projected/ce77f72a-8ed8-4216-b443-a1c5737a50e7-kube-api-access-vppbn\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.534354 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.591637 4764 scope.go:117] "RemoveContainer" containerID="09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7" Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.593903 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7\": container 
with ID starting with 09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7 not found: ID does not exist" containerID="09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.593950 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7"} err="failed to get container status \"09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7\": rpc error: code = NotFound desc = could not find container \"09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7\": container with ID starting with 09ff58923e54df0a9ba7dafb0d315f563d797307b642e489f20d9f2b210d77b7 not found: ID does not exist" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.593978 4764 scope.go:117] "RemoveContainer" containerID="873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97" Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.598533 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97\": container with ID starting with 873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97 not found: ID does not exist" containerID="873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.598575 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97"} err="failed to get container status \"873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97\": rpc error: code = NotFound desc = could not find container \"873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97\": container with ID starting with 873a0081bcdd52da61a3af934ef2dda5d65963a826b4022a35f6b9d5b1338f97 not 
found: ID does not exist" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.598601 4764 scope.go:117] "RemoveContainer" containerID="f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c" Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.599107 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c\": container with ID starting with f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c not found: ID does not exist" containerID="f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.599141 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c"} err="failed to get container status \"f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c\": rpc error: code = NotFound desc = could not find container \"f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c\": container with ID starting with f8801b6e1c148d2acd0012bd582207be3883692fcdd470292b480c108d5f071c not found: ID does not exist" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.599155 4764 scope.go:117] "RemoveContainer" containerID="b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f" Jan 27 07:37:21 crc kubenswrapper[4764]: E0127 07:37:21.600018 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f\": container with ID starting with b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f not found: ID does not exist" containerID="b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.600040 4764 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f"} err="failed to get container status \"b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f\": rpc error: code = NotFound desc = could not find container \"b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f\": container with ID starting with b39bdc67364bcd7c37f87bb0b45a6c64e7394f9749295eaf0bca6b145417be3f not found: ID does not exist" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.638769 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.638841 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vppbn\" (UniqueName: \"kubernetes.io/projected/ce77f72a-8ed8-4216-b443-a1c5737a50e7-kube-api-access-vppbn\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.640373 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.640570 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce77f72a-8ed8-4216-b443-a1c5737a50e7-log-httpd\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc 
kubenswrapper[4764]: I0127 07:37:21.640603 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-config-data\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.640647 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-scripts\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.640696 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.640754 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce77f72a-8ed8-4216-b443-a1c5737a50e7-run-httpd\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.641307 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce77f72a-8ed8-4216-b443-a1c5737a50e7-run-httpd\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.643110 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce77f72a-8ed8-4216-b443-a1c5737a50e7-log-httpd\") pod \"ceilometer-0\" (UID: 
\"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.645839 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.654067 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.661536 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-g2ngw"] Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.661607 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.664243 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.666083 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vppbn\" (UniqueName: \"kubernetes.io/projected/ce77f72a-8ed8-4216-b443-a1c5737a50e7-kube-api-access-vppbn\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.666898 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.668420 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.676277 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-scripts\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.681841 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77f72a-8ed8-4216-b443-a1c5737a50e7-config-data\") pod \"ceilometer-0\" (UID: \"ce77f72a-8ed8-4216-b443-a1c5737a50e7\") " pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.698886 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-g2ngw"] Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.742377 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " 
pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.742512 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-config-data\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.742551 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzs56\" (UniqueName: \"kubernetes.io/projected/6454f8af-d141-4b00-a06b-b5e2af100376-kube-api-access-qzs56\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.742581 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-scripts\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.844887 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-config-data\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.844951 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzs56\" (UniqueName: \"kubernetes.io/projected/6454f8af-d141-4b00-a06b-b5e2af100376-kube-api-access-qzs56\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: 
\"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.844987 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-scripts\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.845016 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.845052 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.848192 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.848975 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-config-data\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.850693 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-scripts\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:21 crc kubenswrapper[4764]: I0127 07:37:21.883236 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzs56\" (UniqueName: \"kubernetes.io/projected/6454f8af-d141-4b00-a06b-b5e2af100376-kube-api-access-qzs56\") pod \"nova-cell1-cell-mapping-g2ngw\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:22 crc kubenswrapper[4764]: I0127 07:37:22.079737 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:22 crc kubenswrapper[4764]: I0127 07:37:22.409695 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d","Type":"ContainerStarted","Data":"2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd"} Jan 27 07:37:22 crc kubenswrapper[4764]: I0127 07:37:22.409787 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d","Type":"ContainerStarted","Data":"522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c"} Jan 27 07:37:22 crc kubenswrapper[4764]: I0127 07:37:22.437797 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.437772519 podStartE2EDuration="2.437772519s" podCreationTimestamp="2026-01-27 07:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:37:22.432028699 +0000 UTC m=+1255.027651235" watchObservedRunningTime="2026-01-27 07:37:22.437772519 +0000 UTC m=+1255.033395045" Jan 27 07:37:22 crc kubenswrapper[4764]: I0127 07:37:22.458975 
4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8a6ca8-7408-49af-b264-0929a3714a76" path="/var/lib/kubelet/pods/9f8a6ca8-7408-49af-b264-0929a3714a76/volumes" Jan 27 07:37:22 crc kubenswrapper[4764]: I0127 07:37:22.478263 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 07:37:22 crc kubenswrapper[4764]: W0127 07:37:22.480225 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce77f72a_8ed8_4216_b443_a1c5737a50e7.slice/crio-13e06fb64a521f863395f6104a34ef933bd9077230e0e3c9327bbd608337dfde WatchSource:0}: Error finding container 13e06fb64a521f863395f6104a34ef933bd9077230e0e3c9327bbd608337dfde: Status 404 returned error can't find the container with id 13e06fb64a521f863395f6104a34ef933bd9077230e0e3c9327bbd608337dfde Jan 27 07:37:22 crc kubenswrapper[4764]: W0127 07:37:22.576276 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6454f8af_d141_4b00_a06b_b5e2af100376.slice/crio-d5d72186ae09f7d826e67d46e2378d3363de0086658964169e4e16d2774f6ee4 WatchSource:0}: Error finding container d5d72186ae09f7d826e67d46e2378d3363de0086658964169e4e16d2774f6ee4: Status 404 returned error can't find the container with id d5d72186ae09f7d826e67d46e2378d3363de0086658964169e4e16d2774f6ee4 Jan 27 07:37:22 crc kubenswrapper[4764]: I0127 07:37:22.580696 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-g2ngw"] Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.426152 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g2ngw" event={"ID":"6454f8af-d141-4b00-a06b-b5e2af100376","Type":"ContainerStarted","Data":"cc915f0160e9989b3ef76cb8c02558efbd07ba674a8bbda27262e8d45eef3a06"} Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.426241 4764 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g2ngw" event={"ID":"6454f8af-d141-4b00-a06b-b5e2af100376","Type":"ContainerStarted","Data":"d5d72186ae09f7d826e67d46e2378d3363de0086658964169e4e16d2774f6ee4"} Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.432049 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce77f72a-8ed8-4216-b443-a1c5737a50e7","Type":"ContainerStarted","Data":"1b2945241bbe20250e8bba365c36b15077a8b1106b962be40ed492bccfca943d"} Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.432105 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce77f72a-8ed8-4216-b443-a1c5737a50e7","Type":"ContainerStarted","Data":"13e06fb64a521f863395f6104a34ef933bd9077230e0e3c9327bbd608337dfde"} Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.450316 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-g2ngw" podStartSLOduration=2.4502957739999998 podStartE2EDuration="2.450295774s" podCreationTimestamp="2026-01-27 07:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:37:23.44061616 +0000 UTC m=+1256.036238696" watchObservedRunningTime="2026-01-27 07:37:23.450295774 +0000 UTC m=+1256.045918310" Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.763119 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.763196 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.763259 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.764435 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a6582187df2e5e6ef1f7d9ea2e06ec2178aed71a06db6ecea42208449605756"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.764552 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://1a6582187df2e5e6ef1f7d9ea2e06ec2178aed71a06db6ecea42208449605756" gracePeriod=600 Jan 27 07:37:23 crc kubenswrapper[4764]: I0127 07:37:23.964677 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.054045 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-hjxvr"] Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.054410 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" podUID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerName="dnsmasq-dns" containerID="cri-o://ca4f7d7ced1d99db483dd3c1cffd228644ad960bcccadc897ce8930f371695ee" gracePeriod=10 Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.469471 4764 generic.go:334] "Generic (PLEG): 
container finished" podID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerID="ca4f7d7ced1d99db483dd3c1cffd228644ad960bcccadc897ce8930f371695ee" exitCode=0 Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.469543 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" event={"ID":"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9","Type":"ContainerDied","Data":"ca4f7d7ced1d99db483dd3c1cffd228644ad960bcccadc897ce8930f371695ee"} Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.503901 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce77f72a-8ed8-4216-b443-a1c5737a50e7","Type":"ContainerStarted","Data":"ef8b2f5146b226ebf3288fff21357cf9494dbafae32f73063b76e77deffc403a"} Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.520803 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="1a6582187df2e5e6ef1f7d9ea2e06ec2178aed71a06db6ecea42208449605756" exitCode=0 Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.521003 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"1a6582187df2e5e6ef1f7d9ea2e06ec2178aed71a06db6ecea42208449605756"} Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.521066 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"b810d312218001b771dbac4e138fc3c15bb0fd651ad5e5238d35ccb0d85c52f4"} Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.521090 4764 scope.go:117] "RemoveContainer" containerID="1c23ae4d5813d2d046c09e56678bc336de540068540ec8945ee83efb0e572821" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.612905 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.707759 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-nb\") pod \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.708113 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-swift-storage-0\") pod \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.708279 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-config\") pod \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.708412 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-sb\") pod \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.708591 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-svc\") pod \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.708922 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwg4\" 
(UniqueName: \"kubernetes.io/projected/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-kube-api-access-sjwg4\") pod \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\" (UID: \"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9\") " Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.716636 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-kube-api-access-sjwg4" (OuterVolumeSpecName: "kube-api-access-sjwg4") pod "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" (UID: "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9"). InnerVolumeSpecName "kube-api-access-sjwg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.769140 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" (UID: "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.773953 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-config" (OuterVolumeSpecName: "config") pod "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" (UID: "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.784884 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" (UID: "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.794920 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" (UID: "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.811017 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.811046 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.811182 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.811313 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.811327 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjwg4\" (UniqueName: \"kubernetes.io/projected/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-kube-api-access-sjwg4\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.817346 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" (UID: "f002f8eb-0fcd-4130-8d1a-c5849b8f90a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:37:24 crc kubenswrapper[4764]: I0127 07:37:24.921878 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:25 crc kubenswrapper[4764]: I0127 07:37:25.545310 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" Jan 27 07:37:25 crc kubenswrapper[4764]: I0127 07:37:25.545630 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-hjxvr" event={"ID":"f002f8eb-0fcd-4130-8d1a-c5849b8f90a9","Type":"ContainerDied","Data":"793033cbba137e50095755c57b9e90a4ca20d78aa586663718886e50a4b17892"} Jan 27 07:37:25 crc kubenswrapper[4764]: I0127 07:37:25.545924 4764 scope.go:117] "RemoveContainer" containerID="ca4f7d7ced1d99db483dd3c1cffd228644ad960bcccadc897ce8930f371695ee" Jan 27 07:37:25 crc kubenswrapper[4764]: I0127 07:37:25.551340 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce77f72a-8ed8-4216-b443-a1c5737a50e7","Type":"ContainerStarted","Data":"220f074552c922c20e5ea004a767612438be44754e5c1dd112be59a9df6ce3d7"} Jan 27 07:37:25 crc kubenswrapper[4764]: I0127 07:37:25.584731 4764 scope.go:117] "RemoveContainer" containerID="311f3db0743162e49c68ba9dbab05c354174f3232a8938f5eed99f817b9a2aa6" Jan 27 07:37:25 crc kubenswrapper[4764]: I0127 07:37:25.593402 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-hjxvr"] Jan 27 07:37:25 crc kubenswrapper[4764]: I0127 07:37:25.611126 4764 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-hjxvr"] Jan 27 07:37:26 crc kubenswrapper[4764]: I0127 07:37:26.451764 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" path="/var/lib/kubelet/pods/f002f8eb-0fcd-4130-8d1a-c5849b8f90a9/volumes" Jan 27 07:37:27 crc kubenswrapper[4764]: I0127 07:37:27.571999 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce77f72a-8ed8-4216-b443-a1c5737a50e7","Type":"ContainerStarted","Data":"7a102aeaa5e41f97e0264a481967277edd3c5873d75d746ac6967246806f16fe"} Jan 27 07:37:27 crc kubenswrapper[4764]: I0127 07:37:27.572428 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 07:37:27 crc kubenswrapper[4764]: I0127 07:37:27.601694 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.700621937 podStartE2EDuration="6.601667766s" podCreationTimestamp="2026-01-27 07:37:21 +0000 UTC" firstStartedPulling="2026-01-27 07:37:22.482652114 +0000 UTC m=+1255.078274650" lastFinishedPulling="2026-01-27 07:37:26.383697943 +0000 UTC m=+1258.979320479" observedRunningTime="2026-01-27 07:37:27.599756656 +0000 UTC m=+1260.195379192" watchObservedRunningTime="2026-01-27 07:37:27.601667766 +0000 UTC m=+1260.197290282" Jan 27 07:37:28 crc kubenswrapper[4764]: I0127 07:37:28.582512 4764 generic.go:334] "Generic (PLEG): container finished" podID="6454f8af-d141-4b00-a06b-b5e2af100376" containerID="cc915f0160e9989b3ef76cb8c02558efbd07ba674a8bbda27262e8d45eef3a06" exitCode=0 Jan 27 07:37:28 crc kubenswrapper[4764]: I0127 07:37:28.582584 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g2ngw" event={"ID":"6454f8af-d141-4b00-a06b-b5e2af100376","Type":"ContainerDied","Data":"cc915f0160e9989b3ef76cb8c02558efbd07ba674a8bbda27262e8d45eef3a06"} Jan 27 
07:37:30 crc kubenswrapper[4764]: I0127 07:37:29.941982 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.136685 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzs56\" (UniqueName: \"kubernetes.io/projected/6454f8af-d141-4b00-a06b-b5e2af100376-kube-api-access-qzs56\") pod \"6454f8af-d141-4b00-a06b-b5e2af100376\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.136881 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-config-data\") pod \"6454f8af-d141-4b00-a06b-b5e2af100376\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.136906 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-scripts\") pod \"6454f8af-d141-4b00-a06b-b5e2af100376\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.137018 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-combined-ca-bundle\") pod \"6454f8af-d141-4b00-a06b-b5e2af100376\" (UID: \"6454f8af-d141-4b00-a06b-b5e2af100376\") " Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.142779 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6454f8af-d141-4b00-a06b-b5e2af100376-kube-api-access-qzs56" (OuterVolumeSpecName: "kube-api-access-qzs56") pod "6454f8af-d141-4b00-a06b-b5e2af100376" (UID: "6454f8af-d141-4b00-a06b-b5e2af100376"). InnerVolumeSpecName "kube-api-access-qzs56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.144112 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-scripts" (OuterVolumeSpecName: "scripts") pod "6454f8af-d141-4b00-a06b-b5e2af100376" (UID: "6454f8af-d141-4b00-a06b-b5e2af100376"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.168121 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6454f8af-d141-4b00-a06b-b5e2af100376" (UID: "6454f8af-d141-4b00-a06b-b5e2af100376"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.168728 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-config-data" (OuterVolumeSpecName: "config-data") pod "6454f8af-d141-4b00-a06b-b5e2af100376" (UID: "6454f8af-d141-4b00-a06b-b5e2af100376"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.239271 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.239308 4764 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.239320 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6454f8af-d141-4b00-a06b-b5e2af100376-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.239335 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzs56\" (UniqueName: \"kubernetes.io/projected/6454f8af-d141-4b00-a06b-b5e2af100376-kube-api-access-qzs56\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.599175 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g2ngw" event={"ID":"6454f8af-d141-4b00-a06b-b5e2af100376","Type":"ContainerDied","Data":"d5d72186ae09f7d826e67d46e2378d3363de0086658964169e4e16d2774f6ee4"} Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.599434 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d72186ae09f7d826e67d46e2378d3363de0086658964169e4e16d2774f6ee4" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.599504 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g2ngw" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.805947 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.806167 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e53a8a08-3378-4a4c-b20d-d90a64662740" containerName="nova-scheduler-scheduler" containerID="cri-o://ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57" gracePeriod=30 Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.813012 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.814210 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.860691 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.883317 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.888606 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-log" containerID="cri-o://ba6d07c84d3077635957d851d2c0fe630d0c06d6753ef533e9ec42bda009db9f" gracePeriod=30 Jan 27 07:37:30 crc kubenswrapper[4764]: I0127 07:37:30.888698 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-metadata" containerID="cri-o://ae0cd3780fa508a09297b790570ff6b1afe30590d7d8c6cc49de9eb6bdca3b1b" gracePeriod=30 Jan 27 07:37:31 crc kubenswrapper[4764]: I0127 07:37:31.608978 4764 
generic.go:334] "Generic (PLEG): container finished" podID="73d9f608-dc01-4940-9ab2-381619a27f31" containerID="ba6d07c84d3077635957d851d2c0fe630d0c06d6753ef533e9ec42bda009db9f" exitCode=143 Jan 27 07:37:31 crc kubenswrapper[4764]: I0127 07:37:31.609162 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73d9f608-dc01-4940-9ab2-381619a27f31","Type":"ContainerDied","Data":"ba6d07c84d3077635957d851d2c0fe630d0c06d6753ef533e9ec42bda009db9f"} Jan 27 07:37:31 crc kubenswrapper[4764]: I0127 07:37:31.822609 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:31 crc kubenswrapper[4764]: I0127 07:37:31.822624 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.215:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.621035 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.621124 4764 generic.go:334] "Generic (PLEG): container finished" podID="e53a8a08-3378-4a4c-b20d-d90a64662740" containerID="ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57" exitCode=0 Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.621154 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e53a8a08-3378-4a4c-b20d-d90a64662740","Type":"ContainerDied","Data":"ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57"} Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.621481 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e53a8a08-3378-4a4c-b20d-d90a64662740","Type":"ContainerDied","Data":"28dcec14af9ef4906e5183d7d145c89a3c347db0c7c13a070c480b7d907a1b69"} Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.621502 4764 scope.go:117] "RemoveContainer" containerID="ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.622365 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-api" containerID="cri-o://2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd" gracePeriod=30 Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.622355 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-log" containerID="cri-o://522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c" gracePeriod=30 Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.656771 4764 scope.go:117] "RemoveContainer" containerID="ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57" Jan 27 07:37:32 crc kubenswrapper[4764]: 
E0127 07:37:32.671072 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57\": container with ID starting with ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57 not found: ID does not exist" containerID="ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.671135 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57"} err="failed to get container status \"ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57\": rpc error: code = NotFound desc = could not find container \"ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57\": container with ID starting with ab90af3dc7ca525a78afd9278e2dd6a97268d0db319938617525efbca7a22f57 not found: ID does not exist" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.791684 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-config-data\") pod \"e53a8a08-3378-4a4c-b20d-d90a64662740\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.791857 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmrwc\" (UniqueName: \"kubernetes.io/projected/e53a8a08-3378-4a4c-b20d-d90a64662740-kube-api-access-qmrwc\") pod \"e53a8a08-3378-4a4c-b20d-d90a64662740\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.791986 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-combined-ca-bundle\") pod 
\"e53a8a08-3378-4a4c-b20d-d90a64662740\" (UID: \"e53a8a08-3378-4a4c-b20d-d90a64662740\") " Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.797308 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53a8a08-3378-4a4c-b20d-d90a64662740-kube-api-access-qmrwc" (OuterVolumeSpecName: "kube-api-access-qmrwc") pod "e53a8a08-3378-4a4c-b20d-d90a64662740" (UID: "e53a8a08-3378-4a4c-b20d-d90a64662740"). InnerVolumeSpecName "kube-api-access-qmrwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.829651 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-config-data" (OuterVolumeSpecName: "config-data") pod "e53a8a08-3378-4a4c-b20d-d90a64662740" (UID: "e53a8a08-3378-4a4c-b20d-d90a64662740"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.838995 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e53a8a08-3378-4a4c-b20d-d90a64662740" (UID: "e53a8a08-3378-4a4c-b20d-d90a64662740"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.894165 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.894207 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmrwc\" (UniqueName: \"kubernetes.io/projected/e53a8a08-3378-4a4c-b20d-d90a64662740-kube-api-access-qmrwc\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:32 crc kubenswrapper[4764]: I0127 07:37:32.894217 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53a8a08-3378-4a4c-b20d-d90a64662740-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.632429 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.640516 4764 generic.go:334] "Generic (PLEG): container finished" podID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerID="522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c" exitCode=143 Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.640573 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d","Type":"ContainerDied","Data":"522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c"} Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.675106 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.689422 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.700530 4764 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:37:33 crc kubenswrapper[4764]: E0127 07:37:33.701180 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53a8a08-3378-4a4c-b20d-d90a64662740" containerName="nova-scheduler-scheduler" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.701212 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53a8a08-3378-4a4c-b20d-d90a64662740" containerName="nova-scheduler-scheduler" Jan 27 07:37:33 crc kubenswrapper[4764]: E0127 07:37:33.701240 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerName="dnsmasq-dns" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.701252 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerName="dnsmasq-dns" Jan 27 07:37:33 crc kubenswrapper[4764]: E0127 07:37:33.701279 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerName="init" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.701291 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerName="init" Jan 27 07:37:33 crc kubenswrapper[4764]: E0127 07:37:33.701327 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6454f8af-d141-4b00-a06b-b5e2af100376" containerName="nova-manage" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.701338 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6454f8af-d141-4b00-a06b-b5e2af100376" containerName="nova-manage" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.701659 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f002f8eb-0fcd-4130-8d1a-c5849b8f90a9" containerName="dnsmasq-dns" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.701698 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="6454f8af-d141-4b00-a06b-b5e2af100376" 
containerName="nova-manage" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.701734 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53a8a08-3378-4a4c-b20d-d90a64662740" containerName="nova-scheduler-scheduler" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.702700 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.705641 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.712185 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.811892 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-config-data\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.811940 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m644\" (UniqueName: \"kubernetes.io/projected/8b2b306d-f769-4e08-a123-d3847d9056c4-kube-api-access-6m644\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.811973 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.913584 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-config-data\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.913644 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m644\" (UniqueName: \"kubernetes.io/projected/8b2b306d-f769-4e08-a123-d3847d9056c4-kube-api-access-6m644\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.913671 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.919472 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.920384 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-config-data\") pod \"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:33 crc kubenswrapper[4764]: I0127 07:37:33.929937 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m644\" (UniqueName: \"kubernetes.io/projected/8b2b306d-f769-4e08-a123-d3847d9056c4-kube-api-access-6m644\") pod 
\"nova-scheduler-0\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " pod="openstack/nova-scheduler-0" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.017253 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.126556 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:54694->10.217.0.209:8775: read: connection reset by peer" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.126901 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:54700->10.217.0.209:8775: read: connection reset by peer" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.461811 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53a8a08-3378-4a4c-b20d-d90a64662740" path="/var/lib/kubelet/pods/e53a8a08-3378-4a4c-b20d-d90a64662740/volumes" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.508930 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.659098 4764 generic.go:334] "Generic (PLEG): container finished" podID="73d9f608-dc01-4940-9ab2-381619a27f31" containerID="ae0cd3780fa508a09297b790570ff6b1afe30590d7d8c6cc49de9eb6bdca3b1b" exitCode=0 Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.659240 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73d9f608-dc01-4940-9ab2-381619a27f31","Type":"ContainerDied","Data":"ae0cd3780fa508a09297b790570ff6b1afe30590d7d8c6cc49de9eb6bdca3b1b"} Jan 27 
07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.661973 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b2b306d-f769-4e08-a123-d3847d9056c4","Type":"ContainerStarted","Data":"319f687657fe00b7813eecdf26341a17fa1eee4bced1cf1d3182da21a78293fc"} Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.730757 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.839591 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-565zz\" (UniqueName: \"kubernetes.io/projected/73d9f608-dc01-4940-9ab2-381619a27f31-kube-api-access-565zz\") pod \"73d9f608-dc01-4940-9ab2-381619a27f31\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.839671 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d9f608-dc01-4940-9ab2-381619a27f31-logs\") pod \"73d9f608-dc01-4940-9ab2-381619a27f31\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.839741 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-combined-ca-bundle\") pod \"73d9f608-dc01-4940-9ab2-381619a27f31\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.839786 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-config-data\") pod \"73d9f608-dc01-4940-9ab2-381619a27f31\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.839818 4764 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-nova-metadata-tls-certs\") pod \"73d9f608-dc01-4940-9ab2-381619a27f31\" (UID: \"73d9f608-dc01-4940-9ab2-381619a27f31\") " Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.840351 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d9f608-dc01-4940-9ab2-381619a27f31-logs" (OuterVolumeSpecName: "logs") pod "73d9f608-dc01-4940-9ab2-381619a27f31" (UID: "73d9f608-dc01-4940-9ab2-381619a27f31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.840542 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d9f608-dc01-4940-9ab2-381619a27f31-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.843121 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d9f608-dc01-4940-9ab2-381619a27f31-kube-api-access-565zz" (OuterVolumeSpecName: "kube-api-access-565zz") pod "73d9f608-dc01-4940-9ab2-381619a27f31" (UID: "73d9f608-dc01-4940-9ab2-381619a27f31"). InnerVolumeSpecName "kube-api-access-565zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.863529 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-config-data" (OuterVolumeSpecName: "config-data") pod "73d9f608-dc01-4940-9ab2-381619a27f31" (UID: "73d9f608-dc01-4940-9ab2-381619a27f31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.864917 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73d9f608-dc01-4940-9ab2-381619a27f31" (UID: "73d9f608-dc01-4940-9ab2-381619a27f31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.892753 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "73d9f608-dc01-4940-9ab2-381619a27f31" (UID: "73d9f608-dc01-4940-9ab2-381619a27f31"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.942300 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-565zz\" (UniqueName: \"kubernetes.io/projected/73d9f608-dc01-4940-9ab2-381619a27f31-kube-api-access-565zz\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.942341 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.942354 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:34 crc kubenswrapper[4764]: I0127 07:37:34.942369 4764 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/73d9f608-dc01-4940-9ab2-381619a27f31-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.680125 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73d9f608-dc01-4940-9ab2-381619a27f31","Type":"ContainerDied","Data":"ef3b51069dcc78f4321fc9ae47543257a55db867b36864a2cc918703c224b7e3"} Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.680192 4764 scope.go:117] "RemoveContainer" containerID="ae0cd3780fa508a09297b790570ff6b1afe30590d7d8c6cc49de9eb6bdca3b1b" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.680368 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.682237 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b2b306d-f769-4e08-a123-d3847d9056c4","Type":"ContainerStarted","Data":"34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545"} Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.711115 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.711088761 podStartE2EDuration="2.711088761s" podCreationTimestamp="2026-01-27 07:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:37:35.704431437 +0000 UTC m=+1268.300053963" watchObservedRunningTime="2026-01-27 07:37:35.711088761 +0000 UTC m=+1268.306711297" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.736026 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.743709 4764 scope.go:117] "RemoveContainer" containerID="ba6d07c84d3077635957d851d2c0fe630d0c06d6753ef533e9ec42bda009db9f" Jan 27 07:37:35 crc 
kubenswrapper[4764]: I0127 07:37:35.748835 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.761316 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:37:35 crc kubenswrapper[4764]: E0127 07:37:35.762913 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-metadata" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.762936 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-metadata" Jan 27 07:37:35 crc kubenswrapper[4764]: E0127 07:37:35.763009 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-log" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.763019 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-log" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.763228 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-metadata" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.763251 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" containerName="nova-metadata-log" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.767678 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.773070 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.777102 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.786958 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.857521 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-config-data\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.857571 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-logs\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.857754 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnfbs\" (UniqueName: \"kubernetes.io/projected/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-kube-api-access-bnfbs\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.857793 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.857812 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.959579 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnfbs\" (UniqueName: \"kubernetes.io/projected/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-kube-api-access-bnfbs\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.959646 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.959675 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.959746 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-config-data\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 
07:37:35.959773 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-logs\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.960147 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-logs\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.965073 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.965688 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-config-data\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.964725 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:35 crc kubenswrapper[4764]: I0127 07:37:35.975041 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnfbs\" (UniqueName: \"kubernetes.io/projected/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-kube-api-access-bnfbs\") pod \"nova-metadata-0\" (UID: 
\"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " pod="openstack/nova-metadata-0" Jan 27 07:37:36 crc kubenswrapper[4764]: I0127 07:37:36.100864 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:37:36 crc kubenswrapper[4764]: I0127 07:37:36.448525 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d9f608-dc01-4940-9ab2-381619a27f31" path="/var/lib/kubelet/pods/73d9f608-dc01-4940-9ab2-381619a27f31/volumes" Jan 27 07:37:36 crc kubenswrapper[4764]: I0127 07:37:36.588760 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:37:36 crc kubenswrapper[4764]: W0127 07:37:36.599080 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dd080f7_3763_4fc2_abd2_7214f8ae7b73.slice/crio-99c67da046c277e4069c6254b6ca71e71af853deea79c1a9c0daae5815c475c4 WatchSource:0}: Error finding container 99c67da046c277e4069c6254b6ca71e71af853deea79c1a9c0daae5815c475c4: Status 404 returned error can't find the container with id 99c67da046c277e4069c6254b6ca71e71af853deea79c1a9c0daae5815c475c4 Jan 27 07:37:36 crc kubenswrapper[4764]: I0127 07:37:36.694859 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dd080f7-3763-4fc2-abd2-7214f8ae7b73","Type":"ContainerStarted","Data":"99c67da046c277e4069c6254b6ca71e71af853deea79c1a9c0daae5815c475c4"} Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.448834 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.597954 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-config-data\") pod \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.598018 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-internal-tls-certs\") pod \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.598085 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-logs\") pod \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.598159 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-combined-ca-bundle\") pod \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.598222 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq6q7\" (UniqueName: \"kubernetes.io/projected/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-kube-api-access-xq6q7\") pod \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.598250 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-public-tls-certs\") pod \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\" (UID: \"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d\") " Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.599742 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-logs" (OuterVolumeSpecName: "logs") pod "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" (UID: "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.605659 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-kube-api-access-xq6q7" (OuterVolumeSpecName: "kube-api-access-xq6q7") pod "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" (UID: "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d"). InnerVolumeSpecName "kube-api-access-xq6q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.628019 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" (UID: "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.630227 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-config-data" (OuterVolumeSpecName: "config-data") pod "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" (UID: "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.659561 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" (UID: "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.662341 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" (UID: "e349fec7-e3a8-4d4c-a684-ab81fdc5b70d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.700126 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.700162 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.700173 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.700186 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq6q7\" (UniqueName: \"kubernetes.io/projected/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-kube-api-access-xq6q7\") on node 
\"crc\" DevicePath \"\"" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.700200 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.700210 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.705907 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dd080f7-3763-4fc2-abd2-7214f8ae7b73","Type":"ContainerStarted","Data":"acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e"} Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.705968 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dd080f7-3763-4fc2-abd2-7214f8ae7b73","Type":"ContainerStarted","Data":"2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03"} Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.711411 4764 generic.go:334] "Generic (PLEG): container finished" podID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerID="2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd" exitCode=0 Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.711507 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d","Type":"ContainerDied","Data":"2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd"} Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.711558 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e349fec7-e3a8-4d4c-a684-ab81fdc5b70d","Type":"ContainerDied","Data":"a2136b46e4dd4b5762a5ec032ec1fb7f7dd16918989efe3bca293dbe2526799c"} Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.711587 4764 scope.go:117] "RemoveContainer" containerID="2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.711800 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.772531 4764 scope.go:117] "RemoveContainer" containerID="522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.777063 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.777049472 podStartE2EDuration="2.777049472s" podCreationTimestamp="2026-01-27 07:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:37:37.733306847 +0000 UTC m=+1270.328929373" watchObservedRunningTime="2026-01-27 07:37:37.777049472 +0000 UTC m=+1270.372671998" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.791261 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.796946 4764 scope.go:117] "RemoveContainer" containerID="2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd" Jan 27 07:37:37 crc kubenswrapper[4764]: E0127 07:37:37.797618 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd\": container with ID starting with 2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd not found: ID does not exist" 
containerID="2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.797657 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd"} err="failed to get container status \"2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd\": rpc error: code = NotFound desc = could not find container \"2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd\": container with ID starting with 2a70732e80aa64c5b8bb269ac0b11f7e6a533d28195c324304927ad0cac5fbdd not found: ID does not exist" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.797678 4764 scope.go:117] "RemoveContainer" containerID="522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c" Jan 27 07:37:37 crc kubenswrapper[4764]: E0127 07:37:37.797934 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c\": container with ID starting with 522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c not found: ID does not exist" containerID="522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.797963 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c"} err="failed to get container status \"522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c\": rpc error: code = NotFound desc = could not find container \"522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c\": container with ID starting with 522ea712be70df1bc270e0ab7429eb865c451b4524a2cd2ef0a1f24302c45c2c not found: ID does not exist" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.808800 4764 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.817387 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:37 crc kubenswrapper[4764]: E0127 07:37:37.817817 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-api" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.817833 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-api" Jan 27 07:37:37 crc kubenswrapper[4764]: E0127 07:37:37.817850 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-log" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.817858 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-log" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.818028 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-api" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.818055 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" containerName="nova-api-log" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.819022 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.820918 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.822425 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.822706 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 07:37:37 crc kubenswrapper[4764]: I0127 07:37:37.826618 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.005253 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.005610 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46bjl\" (UniqueName: \"kubernetes.io/projected/cab831d7-de1f-4293-9ba8-ed1c29a9365a-kube-api-access-46bjl\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.005687 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-config-data\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.005726 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.005754 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab831d7-de1f-4293-9ba8-ed1c29a9365a-logs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.006045 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.107649 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.107697 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab831d7-de1f-4293-9ba8-ed1c29a9365a-logs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.107791 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 
crc kubenswrapper[4764]: I0127 07:37:38.107828 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.107912 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46bjl\" (UniqueName: \"kubernetes.io/projected/cab831d7-de1f-4293-9ba8-ed1c29a9365a-kube-api-access-46bjl\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.107967 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-config-data\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.108789 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab831d7-de1f-4293-9ba8-ed1c29a9365a-logs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.113341 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.113714 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.113860 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.115842 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-config-data\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.137924 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46bjl\" (UniqueName: \"kubernetes.io/projected/cab831d7-de1f-4293-9ba8-ed1c29a9365a-kube-api-access-46bjl\") pod \"nova-api-0\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.434748 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.456824 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e349fec7-e3a8-4d4c-a684-ab81fdc5b70d" path="/var/lib/kubelet/pods/e349fec7-e3a8-4d4c-a684-ab81fdc5b70d/volumes" Jan 27 07:37:38 crc kubenswrapper[4764]: I0127 07:37:38.872584 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:37:39 crc kubenswrapper[4764]: I0127 07:37:39.018460 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 07:37:39 crc kubenswrapper[4764]: I0127 07:37:39.733884 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cab831d7-de1f-4293-9ba8-ed1c29a9365a","Type":"ContainerStarted","Data":"b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8"} Jan 27 07:37:39 crc kubenswrapper[4764]: I0127 07:37:39.734216 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cab831d7-de1f-4293-9ba8-ed1c29a9365a","Type":"ContainerStarted","Data":"2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1"} Jan 27 07:37:39 crc kubenswrapper[4764]: I0127 07:37:39.734229 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cab831d7-de1f-4293-9ba8-ed1c29a9365a","Type":"ContainerStarted","Data":"e74a36a59e4121853ec04725ed1dcd32b1152f4f6234410d23ca4d534a8e3a87"} Jan 27 07:37:39 crc kubenswrapper[4764]: I0127 07:37:39.752358 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.752336251 podStartE2EDuration="2.752336251s" podCreationTimestamp="2026-01-27 07:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:37:39.751972951 +0000 UTC m=+1272.347595477" 
watchObservedRunningTime="2026-01-27 07:37:39.752336251 +0000 UTC m=+1272.347958777" Jan 27 07:37:41 crc kubenswrapper[4764]: I0127 07:37:41.101357 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 07:37:41 crc kubenswrapper[4764]: I0127 07:37:41.101426 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 07:37:44 crc kubenswrapper[4764]: I0127 07:37:44.018382 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 07:37:44 crc kubenswrapper[4764]: I0127 07:37:44.051570 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 07:37:44 crc kubenswrapper[4764]: I0127 07:37:44.814163 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 07:37:46 crc kubenswrapper[4764]: I0127 07:37:46.101537 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 07:37:46 crc kubenswrapper[4764]: I0127 07:37:46.101635 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 07:37:47 crc kubenswrapper[4764]: I0127 07:37:47.111808 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:47 crc kubenswrapper[4764]: I0127 07:37:47.111818 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 27 07:37:48 crc kubenswrapper[4764]: I0127 07:37:48.434907 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:37:48 crc kubenswrapper[4764]: I0127 07:37:48.435256 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:37:49 crc kubenswrapper[4764]: I0127 07:37:49.447612 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:49 crc kubenswrapper[4764]: I0127 07:37:49.447726 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:37:51 crc kubenswrapper[4764]: I0127 07:37:51.860088 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 07:37:56 crc kubenswrapper[4764]: I0127 07:37:56.109033 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 07:37:56 crc kubenswrapper[4764]: I0127 07:37:56.111151 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 07:37:56 crc kubenswrapper[4764]: I0127 07:37:56.122005 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 07:37:56 crc kubenswrapper[4764]: I0127 07:37:56.905203 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 07:37:58 crc kubenswrapper[4764]: I0127 07:37:58.451557 4764 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 07:37:58 crc kubenswrapper[4764]: I0127 07:37:58.452392 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 07:37:58 crc kubenswrapper[4764]: I0127 07:37:58.460106 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 07:37:58 crc kubenswrapper[4764]: I0127 07:37:58.463906 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 07:37:58 crc kubenswrapper[4764]: I0127 07:37:58.917192 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 07:37:58 crc kubenswrapper[4764]: I0127 07:37:58.927551 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.744214 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.744421 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="32c7f10b-0a05-432c-9c2e-b53bcc358a0f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5" gracePeriod=30 Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.810780 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.811065 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8b2b306d-f769-4e08-a123-d3847d9056c4" containerName="nova-scheduler-scheduler" containerID="cri-o://34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" gracePeriod=30 Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 
07:38:00.823714 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.824024 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="85733ad9-3eda-48d5-abf5-2ffe5e2202e3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5" gracePeriod=30 Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.834203 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.841774 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.842008 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-log" containerID="cri-o://2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03" gracePeriod=30 Jan 27 07:38:00 crc kubenswrapper[4764]: I0127 07:38:00.842116 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-metadata" containerID="cri-o://acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e" gracePeriod=30 Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.604526 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.773887 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-combined-ca-bundle\") pod \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.773981 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-config-data\") pod \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.774054 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n92kn\" (UniqueName: \"kubernetes.io/projected/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-kube-api-access-n92kn\") pod \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.774124 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-nova-novncproxy-tls-certs\") pod \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.774146 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-vencrypt-tls-certs\") pod \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\" (UID: \"85733ad9-3eda-48d5-abf5-2ffe5e2202e3\") " Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.781088 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-kube-api-access-n92kn" (OuterVolumeSpecName: "kube-api-access-n92kn") pod "85733ad9-3eda-48d5-abf5-2ffe5e2202e3" (UID: "85733ad9-3eda-48d5-abf5-2ffe5e2202e3"). InnerVolumeSpecName "kube-api-access-n92kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.806580 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85733ad9-3eda-48d5-abf5-2ffe5e2202e3" (UID: "85733ad9-3eda-48d5-abf5-2ffe5e2202e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.811706 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-config-data" (OuterVolumeSpecName: "config-data") pod "85733ad9-3eda-48d5-abf5-2ffe5e2202e3" (UID: "85733ad9-3eda-48d5-abf5-2ffe5e2202e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.849213 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "85733ad9-3eda-48d5-abf5-2ffe5e2202e3" (UID: "85733ad9-3eda-48d5-abf5-2ffe5e2202e3"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.868219 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "85733ad9-3eda-48d5-abf5-2ffe5e2202e3" (UID: "85733ad9-3eda-48d5-abf5-2ffe5e2202e3"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.876089 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.876304 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.876377 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n92kn\" (UniqueName: \"kubernetes.io/projected/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-kube-api-access-n92kn\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.876545 4764 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.876619 4764 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85733ad9-3eda-48d5-abf5-2ffe5e2202e3-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.913261 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.913725 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="c0fe3ecb-bc7e-44ac-bd01-bd775782552f" containerName="nova-cell1-conductor-conductor" containerID="cri-o://74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39" gracePeriod=30 Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.947059 4764 generic.go:334] "Generic (PLEG): container finished" podID="85733ad9-3eda-48d5-abf5-2ffe5e2202e3" containerID="fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5" exitCode=0 Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.947129 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85733ad9-3eda-48d5-abf5-2ffe5e2202e3","Type":"ContainerDied","Data":"fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5"} Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.947162 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85733ad9-3eda-48d5-abf5-2ffe5e2202e3","Type":"ContainerDied","Data":"8cb6c65d01706c744c8c9a25d81cd827c4b3b349eee3de7feeeda97aa09a99a9"} Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.947179 4764 scope.go:117] "RemoveContainer" containerID="fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.947376 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.948815 4764 generic.go:334] "Generic (PLEG): container finished" podID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerID="2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03" exitCode=143 Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.948981 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-log" containerID="cri-o://2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1" gracePeriod=30 Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.949070 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-api" containerID="cri-o://b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8" gracePeriod=30 Jan 27 07:38:01 crc kubenswrapper[4764]: I0127 07:38:01.949100 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dd080f7-3763-4fc2-abd2-7214f8ae7b73","Type":"ContainerDied","Data":"2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03"} Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.032859 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.047607 4764 scope.go:117] "RemoveContainer" containerID="fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5" Jan 27 07:38:02 crc kubenswrapper[4764]: E0127 07:38:02.053012 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5\": container with ID starting with fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5 
not found: ID does not exist" containerID="fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.053070 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5"} err="failed to get container status \"fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5\": rpc error: code = NotFound desc = could not find container \"fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5\": container with ID starting with fc4ca336533192a54d03bdbe8b73d49508fb46a050e5eec0954de95912ac39e5 not found: ID does not exist" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.066514 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.066763 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:38:02 crc kubenswrapper[4764]: E0127 07:38:02.067146 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85733ad9-3eda-48d5-abf5-2ffe5e2202e3" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.067158 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="85733ad9-3eda-48d5-abf5-2ffe5e2202e3" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.067316 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="85733ad9-3eda-48d5-abf5-2ffe5e2202e3" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.069119 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.073644 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.073891 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.073993 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.098974 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.184614 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.184705 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.184773 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kt4b\" (UniqueName: \"kubernetes.io/projected/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-kube-api-access-5kt4b\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 
crc kubenswrapper[4764]: I0127 07:38:02.184855 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.184890 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: E0127 07:38:02.246648 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 07:38:02 crc kubenswrapper[4764]: E0127 07:38:02.250532 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 07:38:02 crc kubenswrapper[4764]: E0127 07:38:02.258531 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 07:38:02 crc 
kubenswrapper[4764]: E0127 07:38:02.258590 4764 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="c0fe3ecb-bc7e-44ac-bd01-bd775782552f" containerName="nova-cell1-conductor-conductor" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.286171 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kt4b\" (UniqueName: \"kubernetes.io/projected/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-kube-api-access-5kt4b\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.286258 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.286282 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.286331 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.286370 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.292920 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.307110 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.307155 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.307252 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.321886 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kt4b\" (UniqueName: 
\"kubernetes.io/projected/7d674cb9-7f4b-4557-a739-dc3c4aba7bdb-kube-api-access-5kt4b\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.428975 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.452842 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85733ad9-3eda-48d5-abf5-2ffe5e2202e3" path="/var/lib/kubelet/pods/85733ad9-3eda-48d5-abf5-2ffe5e2202e3/volumes" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.722386 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.797277 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75scq\" (UniqueName: \"kubernetes.io/projected/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-kube-api-access-75scq\") pod \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.797409 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-combined-ca-bundle\") pod \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.797495 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-config-data\") pod \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\" (UID: \"32c7f10b-0a05-432c-9c2e-b53bcc358a0f\") " Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.803011 4764 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-kube-api-access-75scq" (OuterVolumeSpecName: "kube-api-access-75scq") pod "32c7f10b-0a05-432c-9c2e-b53bcc358a0f" (UID: "32c7f10b-0a05-432c-9c2e-b53bcc358a0f"). InnerVolumeSpecName "kube-api-access-75scq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.824157 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-config-data" (OuterVolumeSpecName: "config-data") pod "32c7f10b-0a05-432c-9c2e-b53bcc358a0f" (UID: "32c7f10b-0a05-432c-9c2e-b53bcc358a0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.832758 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32c7f10b-0a05-432c-9c2e-b53bcc358a0f" (UID: "32c7f10b-0a05-432c-9c2e-b53bcc358a0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.899501 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75scq\" (UniqueName: \"kubernetes.io/projected/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-kube-api-access-75scq\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.899534 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.899543 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c7f10b-0a05-432c-9c2e-b53bcc358a0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.949685 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.958838 4764 generic.go:334] "Generic (PLEG): container finished" podID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerID="2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1" exitCode=143 Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.958920 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cab831d7-de1f-4293-9ba8-ed1c29a9365a","Type":"ContainerDied","Data":"2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1"} Jan 27 07:38:02 crc kubenswrapper[4764]: W0127 07:38:02.959010 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d674cb9_7f4b_4557_a739_dc3c4aba7bdb.slice/crio-565fe9507590a2b90093d687a279fdc7d34401a962aa1b00625bc302e728d06d WatchSource:0}: Error finding container 565fe9507590a2b90093d687a279fdc7d34401a962aa1b00625bc302e728d06d: 
Status 404 returned error can't find the container with id 565fe9507590a2b90093d687a279fdc7d34401a962aa1b00625bc302e728d06d Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.961125 4764 generic.go:334] "Generic (PLEG): container finished" podID="32c7f10b-0a05-432c-9c2e-b53bcc358a0f" containerID="4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5" exitCode=0 Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.961197 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"32c7f10b-0a05-432c-9c2e-b53bcc358a0f","Type":"ContainerDied","Data":"4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5"} Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.961223 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"32c7f10b-0a05-432c-9c2e-b53bcc358a0f","Type":"ContainerDied","Data":"19c7bc2d1aa157cc4a00dc4c3f9dce82f1215cc31637243eb15cab494a2992d6"} Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.961239 4764 scope.go:117] "RemoveContainer" containerID="4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5" Jan 27 07:38:02 crc kubenswrapper[4764]: I0127 07:38:02.961311 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.114491 4764 scope.go:117] "RemoveContainer" containerID="4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5" Jan 27 07:38:03 crc kubenswrapper[4764]: E0127 07:38:03.115225 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5\": container with ID starting with 4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5 not found: ID does not exist" containerID="4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.115267 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5"} err="failed to get container status \"4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5\": rpc error: code = NotFound desc = could not find container \"4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5\": container with ID starting with 4c2ecc31efc218349ce6da108c563f7ecd45b79e7097a524d92a515b7e7c9ab5 not found: ID does not exist" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.147691 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.157527 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.170547 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:38:03 crc kubenswrapper[4764]: E0127 07:38:03.171049 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c7f10b-0a05-432c-9c2e-b53bcc358a0f" containerName="nova-cell0-conductor-conductor" Jan 27 
07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.171073 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c7f10b-0a05-432c-9c2e-b53bcc358a0f" containerName="nova-cell0-conductor-conductor" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.171295 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c7f10b-0a05-432c-9c2e-b53bcc358a0f" containerName="nova-cell0-conductor-conductor" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.172096 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.175248 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.179285 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.307681 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.307784 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.307835 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfwp5\" (UniqueName: 
\"kubernetes.io/projected/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-kube-api-access-hfwp5\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.409645 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.409761 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfwp5\" (UniqueName: \"kubernetes.io/projected/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-kube-api-access-hfwp5\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.409992 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.417180 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.427337 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.430095 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfwp5\" (UniqueName: \"kubernetes.io/projected/b9a73b99-e667-4d9e-81ca-587d8c3a73b6-kube-api-access-hfwp5\") pod \"nova-cell0-conductor-0\" (UID: \"b9a73b99-e667-4d9e-81ca-587d8c3a73b6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.492537 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.986965 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb","Type":"ContainerStarted","Data":"f7ea59b79f25e87283798297b4d27dc3e1099ed52417af8c75dc9d109e89b483"} Jan 27 07:38:03 crc kubenswrapper[4764]: I0127 07:38:03.987004 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7d674cb9-7f4b-4557-a739-dc3c4aba7bdb","Type":"ContainerStarted","Data":"565fe9507590a2b90093d687a279fdc7d34401a962aa1b00625bc302e728d06d"} Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.000936 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 07:38:04 crc kubenswrapper[4764]: E0127 07:38:04.038567 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 07:38:04 crc kubenswrapper[4764]: E0127 07:38:04.040653 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 07:38:04 crc kubenswrapper[4764]: E0127 07:38:04.042570 4764 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 07:38:04 crc kubenswrapper[4764]: E0127 07:38:04.042613 4764 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8b2b306d-f769-4e08-a123-d3847d9056c4" containerName="nova-scheduler-scheduler" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.058573 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": read tcp 10.217.0.2:49592->10.217.0.219:8775: read: connection reset by peer" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.058711 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": read tcp 10.217.0.2:49580->10.217.0.219:8775: read: connection reset by peer" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.070034 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.070004999 podStartE2EDuration="2.070004999s" 
podCreationTimestamp="2026-01-27 07:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:04.015686745 +0000 UTC m=+1296.611309271" watchObservedRunningTime="2026-01-27 07:38:04.070004999 +0000 UTC m=+1296.665627535" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.444756 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.468931 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c7f10b-0a05-432c-9c2e-b53bcc358a0f" path="/var/lib/kubelet/pods/32c7f10b-0a05-432c-9c2e-b53bcc358a0f/volumes" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.555221 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-combined-ca-bundle\") pod \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.555584 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-config-data\") pod \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.555627 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-logs\") pod \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.555650 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-nova-metadata-tls-certs\") pod \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.555793 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnfbs\" (UniqueName: \"kubernetes.io/projected/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-kube-api-access-bnfbs\") pod \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\" (UID: \"4dd080f7-3763-4fc2-abd2-7214f8ae7b73\") " Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.558056 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-logs" (OuterVolumeSpecName: "logs") pod "4dd080f7-3763-4fc2-abd2-7214f8ae7b73" (UID: "4dd080f7-3763-4fc2-abd2-7214f8ae7b73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.562658 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-kube-api-access-bnfbs" (OuterVolumeSpecName: "kube-api-access-bnfbs") pod "4dd080f7-3763-4fc2-abd2-7214f8ae7b73" (UID: "4dd080f7-3763-4fc2-abd2-7214f8ae7b73"). InnerVolumeSpecName "kube-api-access-bnfbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.586361 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-config-data" (OuterVolumeSpecName: "config-data") pod "4dd080f7-3763-4fc2-abd2-7214f8ae7b73" (UID: "4dd080f7-3763-4fc2-abd2-7214f8ae7b73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.596753 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dd080f7-3763-4fc2-abd2-7214f8ae7b73" (UID: "4dd080f7-3763-4fc2-abd2-7214f8ae7b73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.657778 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnfbs\" (UniqueName: \"kubernetes.io/projected/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-kube-api-access-bnfbs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.657813 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.657824 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.657837 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.660418 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4dd080f7-3763-4fc2-abd2-7214f8ae7b73" (UID: "4dd080f7-3763-4fc2-abd2-7214f8ae7b73"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:04 crc kubenswrapper[4764]: I0127 07:38:04.759325 4764 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd080f7-3763-4fc2-abd2-7214f8ae7b73-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.025173 4764 generic.go:334] "Generic (PLEG): container finished" podID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerID="acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e" exitCode=0 Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.025248 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dd080f7-3763-4fc2-abd2-7214f8ae7b73","Type":"ContainerDied","Data":"acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e"} Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.025281 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dd080f7-3763-4fc2-abd2-7214f8ae7b73","Type":"ContainerDied","Data":"99c67da046c277e4069c6254b6ca71e71af853deea79c1a9c0daae5815c475c4"} Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.025301 4764 scope.go:117] "RemoveContainer" containerID="acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.025468 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.040183 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9a73b99-e667-4d9e-81ca-587d8c3a73b6","Type":"ContainerStarted","Data":"b341e4f3d137161aa4b54fd003175c28448a7a334c25e6feb81ab0fa1748c83b"} Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.045551 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9a73b99-e667-4d9e-81ca-587d8c3a73b6","Type":"ContainerStarted","Data":"fbe80de31133fec11f11d648b415be1657e80c5302f4f4af300ba3414bcbaf4a"} Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.045589 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.078612 4764 scope.go:117] "RemoveContainer" containerID="2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.085049 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.085029045 podStartE2EDuration="2.085029045s" podCreationTimestamp="2026-01-27 07:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:05.066049593 +0000 UTC m=+1297.661672119" watchObservedRunningTime="2026-01-27 07:38:05.085029045 +0000 UTC m=+1297.680651571" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.088985 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.108555 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.115169 4764 scope.go:117] "RemoveContainer" 
containerID="acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e" Jan 27 07:38:05 crc kubenswrapper[4764]: E0127 07:38:05.115726 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e\": container with ID starting with acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e not found: ID does not exist" containerID="acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.115778 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e"} err="failed to get container status \"acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e\": rpc error: code = NotFound desc = could not find container \"acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e\": container with ID starting with acd60b1bec92b506aa59a72cb95911bd31a2ee621be741dd1201d907e8b2e85e not found: ID does not exist" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.115812 4764 scope.go:117] "RemoveContainer" containerID="2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03" Jan 27 07:38:05 crc kubenswrapper[4764]: E0127 07:38:05.116210 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03\": container with ID starting with 2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03 not found: ID does not exist" containerID="2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.116250 4764 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03"} err="failed to get container status \"2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03\": rpc error: code = NotFound desc = could not find container \"2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03\": container with ID starting with 2572ca578c36c58f14cbc13339b941da800cc2a43dbd13c0f652ecfb19c87c03 not found: ID does not exist" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.122776 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:38:05 crc kubenswrapper[4764]: E0127 07:38:05.123224 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-metadata" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.123246 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-metadata" Jan 27 07:38:05 crc kubenswrapper[4764]: E0127 07:38:05.123259 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-log" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.123265 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-log" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.123474 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-log" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.123493 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" containerName="nova-metadata-metadata" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.124423 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.133251 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.133495 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.133918 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.271050 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.271311 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzxn\" (UniqueName: \"kubernetes.io/projected/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-kube-api-access-mpzxn\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.271415 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.271478 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-logs\") pod 
\"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.271575 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-config-data\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.373285 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-config-data\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.373369 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.373433 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpzxn\" (UniqueName: \"kubernetes.io/projected/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-kube-api-access-mpzxn\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.373489 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 
07:38:05.373517 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-logs\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.374034 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-logs\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.380901 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.389108 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.391072 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-config-data\") pod \"nova-metadata-0\" (UID: \"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.408069 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpzxn\" (UniqueName: \"kubernetes.io/projected/ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd-kube-api-access-mpzxn\") pod \"nova-metadata-0\" (UID: 
\"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd\") " pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.449212 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 07:38:05 crc kubenswrapper[4764]: I0127 07:38:05.933151 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 07:38:05 crc kubenswrapper[4764]: W0127 07:38:05.933692 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffba56e0_1785_4bd3_8ed4_a2fdc0cdcfdd.slice/crio-40c16175353d1353e3dd3ab8e6f50ca7be79ab001d3c4eb47346cd42cacbbfaa WatchSource:0}: Error finding container 40c16175353d1353e3dd3ab8e6f50ca7be79ab001d3c4eb47346cd42cacbbfaa: Status 404 returned error can't find the container with id 40c16175353d1353e3dd3ab8e6f50ca7be79ab001d3c4eb47346cd42cacbbfaa Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.047773 4764 generic.go:334] "Generic (PLEG): container finished" podID="c0fe3ecb-bc7e-44ac-bd01-bd775782552f" containerID="74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39" exitCode=0 Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.047844 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c0fe3ecb-bc7e-44ac-bd01-bd775782552f","Type":"ContainerDied","Data":"74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39"} Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.049596 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd","Type":"ContainerStarted","Data":"40c16175353d1353e3dd3ab8e6f50ca7be79ab001d3c4eb47346cd42cacbbfaa"} Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.335948 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.454016 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd080f7-3763-4fc2-abd2-7214f8ae7b73" path="/var/lib/kubelet/pods/4dd080f7-3763-4fc2-abd2-7214f8ae7b73/volumes" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.495108 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpjz9\" (UniqueName: \"kubernetes.io/projected/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-kube-api-access-vpjz9\") pod \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.495168 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-config-data\") pod \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.495387 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-combined-ca-bundle\") pod \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\" (UID: \"c0fe3ecb-bc7e-44ac-bd01-bd775782552f\") " Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.500025 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-kube-api-access-vpjz9" (OuterVolumeSpecName: "kube-api-access-vpjz9") pod "c0fe3ecb-bc7e-44ac-bd01-bd775782552f" (UID: "c0fe3ecb-bc7e-44ac-bd01-bd775782552f"). InnerVolumeSpecName "kube-api-access-vpjz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.545923 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.574922 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0fe3ecb-bc7e-44ac-bd01-bd775782552f" (UID: "c0fe3ecb-bc7e-44ac-bd01-bd775782552f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.579688 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-config-data" (OuterVolumeSpecName: "config-data") pod "c0fe3ecb-bc7e-44ac-bd01-bd775782552f" (UID: "c0fe3ecb-bc7e-44ac-bd01-bd775782552f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.598716 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpjz9\" (UniqueName: \"kubernetes.io/projected/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-kube-api-access-vpjz9\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.598751 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.598763 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe3ecb-bc7e-44ac-bd01-bd775782552f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.699367 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-combined-ca-bundle\") pod \"8b2b306d-f769-4e08-a123-d3847d9056c4\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.699424 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m644\" (UniqueName: \"kubernetes.io/projected/8b2b306d-f769-4e08-a123-d3847d9056c4-kube-api-access-6m644\") pod \"8b2b306d-f769-4e08-a123-d3847d9056c4\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.699543 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-config-data\") pod \"8b2b306d-f769-4e08-a123-d3847d9056c4\" (UID: \"8b2b306d-f769-4e08-a123-d3847d9056c4\") " Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.706747 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2b306d-f769-4e08-a123-d3847d9056c4-kube-api-access-6m644" (OuterVolumeSpecName: "kube-api-access-6m644") pod "8b2b306d-f769-4e08-a123-d3847d9056c4" (UID: "8b2b306d-f769-4e08-a123-d3847d9056c4"). InnerVolumeSpecName "kube-api-access-6m644". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.731663 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b2b306d-f769-4e08-a123-d3847d9056c4" (UID: "8b2b306d-f769-4e08-a123-d3847d9056c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.750977 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-config-data" (OuterVolumeSpecName: "config-data") pod "8b2b306d-f769-4e08-a123-d3847d9056c4" (UID: "8b2b306d-f769-4e08-a123-d3847d9056c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.812566 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.812620 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m644\" (UniqueName: \"kubernetes.io/projected/8b2b306d-f769-4e08-a123-d3847d9056c4-kube-api-access-6m644\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.812639 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2b306d-f769-4e08-a123-d3847d9056c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:06 crc kubenswrapper[4764]: I0127 07:38:06.850627 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.018847 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46bjl\" (UniqueName: \"kubernetes.io/projected/cab831d7-de1f-4293-9ba8-ed1c29a9365a-kube-api-access-46bjl\") pod \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.019283 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-config-data\") pod \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.019388 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-combined-ca-bundle\") pod \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.019479 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs\") pod \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.019513 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-public-tls-certs\") pod \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.019552 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/cab831d7-de1f-4293-9ba8-ed1c29a9365a-logs\") pod \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.020763 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab831d7-de1f-4293-9ba8-ed1c29a9365a-logs" (OuterVolumeSpecName: "logs") pod "cab831d7-de1f-4293-9ba8-ed1c29a9365a" (UID: "cab831d7-de1f-4293-9ba8-ed1c29a9365a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.024821 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab831d7-de1f-4293-9ba8-ed1c29a9365a-kube-api-access-46bjl" (OuterVolumeSpecName: "kube-api-access-46bjl") pod "cab831d7-de1f-4293-9ba8-ed1c29a9365a" (UID: "cab831d7-de1f-4293-9ba8-ed1c29a9365a"). InnerVolumeSpecName "kube-api-access-46bjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.053137 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cab831d7-de1f-4293-9ba8-ed1c29a9365a" (UID: "cab831d7-de1f-4293-9ba8-ed1c29a9365a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.072841 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd","Type":"ContainerStarted","Data":"9148eea748d6059a68a7b464f1e041541793206d2b7f37b47663c85990fe2f0e"} Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.073036 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd","Type":"ContainerStarted","Data":"d13dd04d417b6aaad654569cbe3c9960e731942c21614568ed2bb66036a0ce6a"} Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.074948 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c0fe3ecb-bc7e-44ac-bd01-bd775782552f","Type":"ContainerDied","Data":"2279f745a918667876b905c658b2c4e9ea5fa68a494c48782eb71c820555f87e"} Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.074983 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.075007 4764 scope.go:117] "RemoveContainer" containerID="74879e31f4c92ac4a6caa3dda439b8ed34367915a4cf0f05db0e4b210ac59e39" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.075382 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-config-data" (OuterVolumeSpecName: "config-data") pod "cab831d7-de1f-4293-9ba8-ed1c29a9365a" (UID: "cab831d7-de1f-4293-9ba8-ed1c29a9365a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.082694 4764 generic.go:334] "Generic (PLEG): container finished" podID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerID="b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8" exitCode=0 Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.082735 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.082772 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cab831d7-de1f-4293-9ba8-ed1c29a9365a","Type":"ContainerDied","Data":"b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8"} Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.082800 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cab831d7-de1f-4293-9ba8-ed1c29a9365a","Type":"ContainerDied","Data":"e74a36a59e4121853ec04725ed1dcd32b1152f4f6234410d23ca4d534a8e3a87"} Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.084675 4764 generic.go:334] "Generic (PLEG): container finished" podID="8b2b306d-f769-4e08-a123-d3847d9056c4" containerID="34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" exitCode=0 Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.084713 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b2b306d-f769-4e08-a123-d3847d9056c4","Type":"ContainerDied","Data":"34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545"} Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.084733 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8b2b306d-f769-4e08-a123-d3847d9056c4","Type":"ContainerDied","Data":"319f687657fe00b7813eecdf26341a17fa1eee4bced1cf1d3182da21a78293fc"} Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.084795 4764 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.095368 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cab831d7-de1f-4293-9ba8-ed1c29a9365a" (UID: "cab831d7-de1f-4293-9ba8-ed1c29a9365a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.098940 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.098916051 podStartE2EDuration="2.098916051s" podCreationTimestamp="2026-01-27 07:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:07.090297689 +0000 UTC m=+1299.685920215" watchObservedRunningTime="2026-01-27 07:38:07.098916051 +0000 UTC m=+1299.694538567" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.121851 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cab831d7-de1f-4293-9ba8-ed1c29a9365a" (UID: "cab831d7-de1f-4293-9ba8-ed1c29a9365a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.122053 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs\") pod \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\" (UID: \"cab831d7-de1f-4293-9ba8-ed1c29a9365a\") " Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.122585 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.122606 4764 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.122615 4764 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab831d7-de1f-4293-9ba8-ed1c29a9365a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.122626 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46bjl\" (UniqueName: \"kubernetes.io/projected/cab831d7-de1f-4293-9ba8-ed1c29a9365a-kube-api-access-46bjl\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.122636 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:07 crc kubenswrapper[4764]: W0127 07:38:07.122700 4764 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cab831d7-de1f-4293-9ba8-ed1c29a9365a/volumes/kubernetes.io~secret/internal-tls-certs Jan 27 
07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.122707 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cab831d7-de1f-4293-9ba8-ed1c29a9365a" (UID: "cab831d7-de1f-4293-9ba8-ed1c29a9365a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.193072 4764 scope.go:117] "RemoveContainer" containerID="b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.203917 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.215808 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.223344 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.224857 4764 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab831d7-de1f-4293-9ba8-ed1c29a9365a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.231595 4764 scope.go:117] "RemoveContainer" containerID="2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.244948 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.254827 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: E0127 07:38:07.255320 4764 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-api" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255341 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-api" Jan 27 07:38:07 crc kubenswrapper[4764]: E0127 07:38:07.255372 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-log" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255380 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-log" Jan 27 07:38:07 crc kubenswrapper[4764]: E0127 07:38:07.255394 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fe3ecb-bc7e-44ac-bd01-bd775782552f" containerName="nova-cell1-conductor-conductor" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255402 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fe3ecb-bc7e-44ac-bd01-bd775782552f" containerName="nova-cell1-conductor-conductor" Jan 27 07:38:07 crc kubenswrapper[4764]: E0127 07:38:07.255413 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2b306d-f769-4e08-a123-d3847d9056c4" containerName="nova-scheduler-scheduler" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255420 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2b306d-f769-4e08-a123-d3847d9056c4" containerName="nova-scheduler-scheduler" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255627 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fe3ecb-bc7e-44ac-bd01-bd775782552f" containerName="nova-cell1-conductor-conductor" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255645 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2b306d-f769-4e08-a123-d3847d9056c4" containerName="nova-scheduler-scheduler" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255666 4764 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-api" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.255679 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" containerName="nova-api-log" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.256430 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.260695 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.265834 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.267890 4764 scope.go:117] "RemoveContainer" containerID="b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8" Jan 27 07:38:07 crc kubenswrapper[4764]: E0127 07:38:07.269365 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8\": container with ID starting with b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8 not found: ID does not exist" containerID="b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.269413 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8"} err="failed to get container status \"b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8\": rpc error: code = NotFound desc = could not find container \"b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8\": container with ID starting with 
b8a221ea08bf291e4b01077a88f82f4cf3480f8e2ae24f1e8f480deb6efaccd8 not found: ID does not exist" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.269453 4764 scope.go:117] "RemoveContainer" containerID="2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1" Jan 27 07:38:07 crc kubenswrapper[4764]: E0127 07:38:07.269853 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1\": container with ID starting with 2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1 not found: ID does not exist" containerID="2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.269979 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1"} err="failed to get container status \"2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1\": rpc error: code = NotFound desc = could not find container \"2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1\": container with ID starting with 2805953689e8df92ddfceaac45b8b12309daf369f4cd544008ae98c0bcafd1b1 not found: ID does not exist" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.270074 4764 scope.go:117] "RemoveContainer" containerID="34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.275931 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.278056 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.282274 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.290749 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.307285 4764 scope.go:117] "RemoveContainer" containerID="34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" Jan 27 07:38:07 crc kubenswrapper[4764]: E0127 07:38:07.307787 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545\": container with ID starting with 34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545 not found: ID does not exist" containerID="34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.307877 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545"} err="failed to get container status \"34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545\": rpc error: code = NotFound desc = could not find container \"34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545\": container with ID starting with 34740e015bf47186c5130eacf99c0379067e0ca7c1dd828ea213c824cc089545 not found: ID does not exist" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.417038 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.427578 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.427628 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887d1985-52c3-476c-bd70-e82a17852b13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.428069 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h98q\" (UniqueName: \"kubernetes.io/projected/887d1985-52c3-476c-bd70-e82a17852b13-kube-api-access-9h98q\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.428278 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9756bce-266e-4397-875e-8be9c3f383f9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.428400 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9756bce-266e-4397-875e-8be9c3f383f9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.428556 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh7sj\" (UniqueName: \"kubernetes.io/projected/a9756bce-266e-4397-875e-8be9c3f383f9-kube-api-access-nh7sj\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.428714 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887d1985-52c3-476c-bd70-e82a17852b13-config-data\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.430110 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.438268 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.439630 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.444856 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.444989 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.446128 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.455168 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.531869 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887d1985-52c3-476c-bd70-e82a17852b13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.531935 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532053 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3d920a5-7d67-483d-9150-fd6a434a3def-logs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532079 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532136 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h98q\" (UniqueName: \"kubernetes.io/projected/887d1985-52c3-476c-bd70-e82a17852b13-kube-api-access-9h98q\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532163 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-config-data\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532219 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " 
pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532331 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9756bce-266e-4397-875e-8be9c3f383f9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532398 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7gl8\" (UniqueName: \"kubernetes.io/projected/d3d920a5-7d67-483d-9150-fd6a434a3def-kube-api-access-z7gl8\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532455 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9756bce-266e-4397-875e-8be9c3f383f9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532509 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh7sj\" (UniqueName: \"kubernetes.io/projected/a9756bce-266e-4397-875e-8be9c3f383f9-kube-api-access-nh7sj\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.532617 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887d1985-52c3-476c-bd70-e82a17852b13-config-data\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.537923 4764 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887d1985-52c3-476c-bd70-e82a17852b13-config-data\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.538174 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887d1985-52c3-476c-bd70-e82a17852b13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.539649 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9756bce-266e-4397-875e-8be9c3f383f9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.552934 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9756bce-266e-4397-875e-8be9c3f383f9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.556337 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh7sj\" (UniqueName: \"kubernetes.io/projected/a9756bce-266e-4397-875e-8be9c3f383f9-kube-api-access-nh7sj\") pod \"nova-cell1-conductor-0\" (UID: \"a9756bce-266e-4397-875e-8be9c3f383f9\") " pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.556943 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h98q\" (UniqueName: 
\"kubernetes.io/projected/887d1985-52c3-476c-bd70-e82a17852b13-kube-api-access-9h98q\") pod \"nova-scheduler-0\" (UID: \"887d1985-52c3-476c-bd70-e82a17852b13\") " pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.577052 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.599364 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.634233 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.634721 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.634748 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3d920a5-7d67-483d-9150-fd6a434a3def-logs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.634794 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-config-data\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 
07:38:07.634827 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.634919 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7gl8\" (UniqueName: \"kubernetes.io/projected/d3d920a5-7d67-483d-9150-fd6a434a3def-kube-api-access-z7gl8\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.635502 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3d920a5-7d67-483d-9150-fd6a434a3def-logs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.639985 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.639985 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.640420 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.643530 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d920a5-7d67-483d-9150-fd6a434a3def-config-data\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.655167 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7gl8\" (UniqueName: \"kubernetes.io/projected/d3d920a5-7d67-483d-9150-fd6a434a3def-kube-api-access-z7gl8\") pod \"nova-api-0\" (UID: \"d3d920a5-7d67-483d-9150-fd6a434a3def\") " pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.803073 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 07:38:07 crc kubenswrapper[4764]: I0127 07:38:07.896603 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.043739 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.093835 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"887d1985-52c3-476c-bd70-e82a17852b13","Type":"ContainerStarted","Data":"9bc242141f2a0812d6b9ed147bd85d450fa7d7a87aa5428ab96d3e14c2f7e559"} Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.095156 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a9756bce-266e-4397-875e-8be9c3f383f9","Type":"ContainerStarted","Data":"3d0d33a851c70e5d562d3030428fb0ffc813250ed0d570b477bc266184fce32d"} Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.285252 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] 
Jan 27 07:38:08 crc kubenswrapper[4764]: W0127 07:38:08.293333 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3d920a5_7d67_483d_9150_fd6a434a3def.slice/crio-9113404e0fce32b6e58261b625eeaa1413f802c9c66a63308919f1bdff7e314c WatchSource:0}: Error finding container 9113404e0fce32b6e58261b625eeaa1413f802c9c66a63308919f1bdff7e314c: Status 404 returned error can't find the container with id 9113404e0fce32b6e58261b625eeaa1413f802c9c66a63308919f1bdff7e314c Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.476191 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2b306d-f769-4e08-a123-d3847d9056c4" path="/var/lib/kubelet/pods/8b2b306d-f769-4e08-a123-d3847d9056c4/volumes" Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.477334 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fe3ecb-bc7e-44ac-bd01-bd775782552f" path="/var/lib/kubelet/pods/c0fe3ecb-bc7e-44ac-bd01-bd775782552f/volumes" Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.478190 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab831d7-de1f-4293-9ba8-ed1c29a9365a" path="/var/lib/kubelet/pods/cab831d7-de1f-4293-9ba8-ed1c29a9365a/volumes" Jan 27 07:38:08 crc kubenswrapper[4764]: I0127 07:38:08.907466 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.108507 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a9756bce-266e-4397-875e-8be9c3f383f9","Type":"ContainerStarted","Data":"4e18cb792527dea0635cb805f5561ca96f599ec544834f4cd7b40281382d5572"} Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.109632 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.117944 4764 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3d920a5-7d67-483d-9150-fd6a434a3def","Type":"ContainerStarted","Data":"34661f72193209a8832d97aef7e5b60342830190459bc4a30ae4c61a8767dcf5"} Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.117996 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3d920a5-7d67-483d-9150-fd6a434a3def","Type":"ContainerStarted","Data":"c260bdf61e43cc5f4e4114e8dcebbdd549f2bb474bba060cd53bfeae2ba0b2bf"} Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.118010 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3d920a5-7d67-483d-9150-fd6a434a3def","Type":"ContainerStarted","Data":"9113404e0fce32b6e58261b625eeaa1413f802c9c66a63308919f1bdff7e314c"} Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.126089 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"887d1985-52c3-476c-bd70-e82a17852b13","Type":"ContainerStarted","Data":"cc0e9339915f4830778898199221977a555383ca25c8c6f8e50e89e3337911eb"} Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.134072 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.13405355 podStartE2EDuration="2.13405355s" podCreationTimestamp="2026-01-27 07:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:09.130972387 +0000 UTC m=+1301.726594923" watchObservedRunningTime="2026-01-27 07:38:09.13405355 +0000 UTC m=+1301.729676076" Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.156006 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.155989121 podStartE2EDuration="2.155989121s" podCreationTimestamp="2026-01-27 07:38:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:09.150999517 +0000 UTC m=+1301.746622053" watchObservedRunningTime="2026-01-27 07:38:09.155989121 +0000 UTC m=+1301.751611647" Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.185782 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.185766824 podStartE2EDuration="2.185766824s" podCreationTimestamp="2026-01-27 07:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:09.180962844 +0000 UTC m=+1301.776585370" watchObservedRunningTime="2026-01-27 07:38:09.185766824 +0000 UTC m=+1301.781389340" Jan 27 07:38:09 crc kubenswrapper[4764]: I0127 07:38:09.820672 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:38:10 crc kubenswrapper[4764]: I0127 07:38:10.450729 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 07:38:10 crc kubenswrapper[4764]: I0127 07:38:10.450791 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 07:38:12 crc kubenswrapper[4764]: I0127 07:38:12.429284 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:12 crc kubenswrapper[4764]: I0127 07:38:12.450166 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:12 crc kubenswrapper[4764]: I0127 07:38:12.578614 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 07:38:13 crc kubenswrapper[4764]: I0127 07:38:13.058976 4764 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-server-0" podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerName="rabbitmq" containerID="cri-o://19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830" gracePeriod=604796 Jan 27 07:38:13 crc kubenswrapper[4764]: I0127 07:38:13.174694 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 07:38:13 crc kubenswrapper[4764]: I0127 07:38:13.521368 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 07:38:14 crc kubenswrapper[4764]: I0127 07:38:14.209788 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerName="rabbitmq" containerID="cri-o://52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189" gracePeriod=604796 Jan 27 07:38:15 crc kubenswrapper[4764]: I0127 07:38:15.449324 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 07:38:15 crc kubenswrapper[4764]: I0127 07:38:15.449662 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 07:38:16 crc kubenswrapper[4764]: I0127 07:38:16.310634 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 27 07:38:16 crc kubenswrapper[4764]: I0127 07:38:16.377793 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 27 07:38:16 crc kubenswrapper[4764]: I0127 07:38:16.463573 4764 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:38:16 crc kubenswrapper[4764]: I0127 07:38:16.463571 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:38:17 crc kubenswrapper[4764]: I0127 07:38:17.578570 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 07:38:17 crc kubenswrapper[4764]: I0127 07:38:17.613926 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 07:38:17 crc kubenswrapper[4764]: I0127 07:38:17.644376 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 07:38:17 crc kubenswrapper[4764]: I0127 07:38:17.804114 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:38:17 crc kubenswrapper[4764]: I0127 07:38:17.804225 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 07:38:18 crc kubenswrapper[4764]: I0127 07:38:18.224428 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 07:38:18 crc kubenswrapper[4764]: I0127 07:38:18.817781 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d3d920a5-7d67-483d-9150-fd6a434a3def" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 27 07:38:18 crc kubenswrapper[4764]: I0127 07:38:18.818254 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d3d920a5-7d67-483d-9150-fd6a434a3def" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.769185 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.867751 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-server-conf\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.867825 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr8bn\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-kube-api-access-wr8bn\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.867932 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-confd\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.867997 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-config-data\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: 
\"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.868025 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-plugins-conf\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.868047 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.868092 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-erlang-cookie\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.868538 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.869206 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.868164 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b6de15-11fa-47bd-8648-53a8ad02deda-pod-info\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.869294 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-tls\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.869758 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-plugins\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.869799 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09b6de15-11fa-47bd-8648-53a8ad02deda-erlang-cookie-secret\") pod \"09b6de15-11fa-47bd-8648-53a8ad02deda\" (UID: \"09b6de15-11fa-47bd-8648-53a8ad02deda\") " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.870514 4764 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.870532 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-erlang-cookie\") on node 
\"crc\" DevicePath \"\"" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.870875 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.877671 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b6de15-11fa-47bd-8648-53a8ad02deda-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.893215 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.893781 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-kube-api-access-wr8bn" (OuterVolumeSpecName: "kube-api-access-wr8bn") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "kube-api-access-wr8bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.903222 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.909261 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/09b6de15-11fa-47bd-8648-53a8ad02deda-pod-info" (OuterVolumeSpecName: "pod-info") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.930207 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-config-data" (OuterVolumeSpecName: "config-data") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.972850 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.973180 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.973194 4764 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b6de15-11fa-47bd-8648-53a8ad02deda-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.973227 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.973239 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.973248 4764 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09b6de15-11fa-47bd-8648-53a8ad02deda-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:19 crc kubenswrapper[4764]: I0127 07:38:19.973258 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr8bn\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-kube-api-access-wr8bn\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.002174 4764 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-server-conf" (OuterVolumeSpecName: "server-conf") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.016749 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.054550 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "09b6de15-11fa-47bd-8648-53a8ad02deda" (UID: "09b6de15-11fa-47bd-8648-53a8ad02deda"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.076492 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b6de15-11fa-47bd-8648-53a8ad02deda-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.076611 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.076631 4764 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b6de15-11fa-47bd-8648-53a8ad02deda-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.220189 4764 generic.go:334] "Generic (PLEG): container finished" podID="09b6de15-11fa-47bd-8648-53a8ad02deda" 
containerID="19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830" exitCode=0 Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.220231 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"09b6de15-11fa-47bd-8648-53a8ad02deda","Type":"ContainerDied","Data":"19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830"} Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.220256 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"09b6de15-11fa-47bd-8648-53a8ad02deda","Type":"ContainerDied","Data":"28fa5cd75a0c0b748101f8477d376cbd49b6a208dc574768967f3ced726be3f8"} Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.220274 4764 scope.go:117] "RemoveContainer" containerID="19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.220401 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.268325 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.280394 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.299723 4764 scope.go:117] "RemoveContainer" containerID="7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.304780 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:38:20 crc kubenswrapper[4764]: E0127 07:38:20.305265 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerName="rabbitmq" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.305283 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerName="rabbitmq" Jan 27 07:38:20 crc kubenswrapper[4764]: E0127 07:38:20.305301 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerName="setup-container" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.305310 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerName="setup-container" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.305564 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" containerName="rabbitmq" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.308392 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.313473 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.313736 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.313914 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.314064 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.314216 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7cpdn" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.314517 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.314687 4764 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-config-data" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.320582 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.350690 4764 scope.go:117] "RemoveContainer" containerID="19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830" Jan 27 07:38:20 crc kubenswrapper[4764]: E0127 07:38:20.351291 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830\": container with ID starting with 19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830 not found: ID does not exist" containerID="19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.351339 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830"} err="failed to get container status \"19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830\": rpc error: code = NotFound desc = could not find container \"19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830\": container with ID starting with 19e260b7f3d0038cb76920a11bb985a6d54a784e872a83954e832ecd22739830 not found: ID does not exist" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.351366 4764 scope.go:117] "RemoveContainer" containerID="7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8" Jan 27 07:38:20 crc kubenswrapper[4764]: E0127 07:38:20.355092 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8\": container with ID starting with 7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8 not found: ID does not exist" 
containerID="7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.355150 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8"} err="failed to get container status \"7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8\": rpc error: code = NotFound desc = could not find container \"7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8\": container with ID starting with 7cf8c087f7bb79cc67cd5b03d9520736ddf7f78fffbac6b9955dc2013ff7d1d8 not found: ID does not exist" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.455336 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b6de15-11fa-47bd-8648-53a8ad02deda" path="/var/lib/kubelet/pods/09b6de15-11fa-47bd-8648-53a8ad02deda/volumes" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.483726 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95dd078e-042c-48b3-aa1c-f8f801d66ae0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.483883 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-config-data\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.483926 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.483987 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.484060 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.484176 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95dd078e-042c-48b3-aa1c-f8f801d66ae0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.484246 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.484314 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " 
pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.484383 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6kwq\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-kube-api-access-c6kwq\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.484431 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.484479 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.585826 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95dd078e-042c-48b3-aa1c-f8f801d66ae0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.585922 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-config-data\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.585944 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.585968 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586027 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586081 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95dd078e-042c-48b3-aa1c-f8f801d66ae0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586105 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586124 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586163 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6kwq\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-kube-api-access-c6kwq\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586184 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586201 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.586492 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.587482 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-config-data\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " 
pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.587492 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.587803 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.588103 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.588139 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95dd078e-042c-48b3-aa1c-f8f801d66ae0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.595420 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.595740 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.609812 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6kwq\" (UniqueName: \"kubernetes.io/projected/95dd078e-042c-48b3-aa1c-f8f801d66ae0-kube-api-access-c6kwq\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.610728 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95dd078e-042c-48b3-aa1c-f8f801d66ae0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.618244 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95dd078e-042c-48b3-aa1c-f8f801d66ae0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.627281 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"95dd078e-042c-48b3-aa1c-f8f801d66ae0\") " pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.636953 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.790994 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.890782 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dff27bbf-49bf-4af7-aedb-e59e84269af3-pod-info\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.890842 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-config-data\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.890872 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-server-conf\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.890907 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb5ht\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-kube-api-access-cb5ht\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.890981 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-plugins\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.891023 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-confd\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.891050 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dff27bbf-49bf-4af7-aedb-e59e84269af3-erlang-cookie-secret\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.891077 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-plugins-conf\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.891095 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-tls\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.891167 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.891242 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-erlang-cookie\") pod \"dff27bbf-49bf-4af7-aedb-e59e84269af3\" (UID: \"dff27bbf-49bf-4af7-aedb-e59e84269af3\") " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 
07:38:20.892353 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.895654 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.896301 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff27bbf-49bf-4af7-aedb-e59e84269af3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.897056 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.897910 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.898567 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dff27bbf-49bf-4af7-aedb-e59e84269af3-pod-info" (OuterVolumeSpecName: "pod-info") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.900178 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.902990 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-kube-api-access-cb5ht" (OuterVolumeSpecName: "kube-api-access-cb5ht") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "kube-api-access-cb5ht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.921642 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-config-data" (OuterVolumeSpecName: "config-data") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.949815 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-server-conf" (OuterVolumeSpecName: "server-conf") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.994943 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.994991 4764 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dff27bbf-49bf-4af7-aedb-e59e84269af3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995001 4764 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995011 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" 
Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995050 4764 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995060 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995069 4764 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dff27bbf-49bf-4af7-aedb-e59e84269af3-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995076 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995084 4764 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dff27bbf-49bf-4af7-aedb-e59e84269af3-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:20 crc kubenswrapper[4764]: I0127 07:38:20.995091 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb5ht\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-kube-api-access-cb5ht\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.013156 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dff27bbf-49bf-4af7-aedb-e59e84269af3" (UID: "dff27bbf-49bf-4af7-aedb-e59e84269af3"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.019605 4764 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.096554 4764 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dff27bbf-49bf-4af7-aedb-e59e84269af3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.096906 4764 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.145251 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.236019 4764 generic.go:334] "Generic (PLEG): container finished" podID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerID="52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189" exitCode=0 Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.236066 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dff27bbf-49bf-4af7-aedb-e59e84269af3","Type":"ContainerDied","Data":"52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189"} Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.236120 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.236143 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dff27bbf-49bf-4af7-aedb-e59e84269af3","Type":"ContainerDied","Data":"b1b8210521394b647bf3d6f0b153a2392a902f22622166f7cdfd1da1c7e204a5"} Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.236170 4764 scope.go:117] "RemoveContainer" containerID="52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.240307 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"95dd078e-042c-48b3-aa1c-f8f801d66ae0","Type":"ContainerStarted","Data":"1d8a53aad52249d486ac4c89584f95e77eade699db010f285d76c9e445b295b5"} Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.271057 4764 scope.go:117] "RemoveContainer" containerID="0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.282642 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.304576 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.313790 4764 scope.go:117] "RemoveContainer" containerID="52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189" Jan 27 07:38:21 crc kubenswrapper[4764]: E0127 07:38:21.314360 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189\": container with ID starting with 52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189 not found: ID does not exist" containerID="52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189" Jan 
27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.314393 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189"} err="failed to get container status \"52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189\": rpc error: code = NotFound desc = could not find container \"52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189\": container with ID starting with 52f571441173031d7095b1db5425a1a6336e4fb113d109acb462f3fd5ac3e189 not found: ID does not exist" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.314414 4764 scope.go:117] "RemoveContainer" containerID="0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a" Jan 27 07:38:21 crc kubenswrapper[4764]: E0127 07:38:21.317068 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a\": container with ID starting with 0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a not found: ID does not exist" containerID="0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.317113 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a"} err="failed to get container status \"0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a\": rpc error: code = NotFound desc = could not find container \"0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a\": container with ID starting with 0f948a5f4a0055c37d583623aa9a8da2ad07c21f53c5bf443ee8a18cbc876f1a not found: ID does not exist" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.319816 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:38:21 
crc kubenswrapper[4764]: E0127 07:38:21.320253 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerName="rabbitmq" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.320275 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerName="rabbitmq" Jan 27 07:38:21 crc kubenswrapper[4764]: E0127 07:38:21.320310 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerName="setup-container" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.320317 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerName="setup-container" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.320508 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" containerName="rabbitmq" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.321711 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.324339 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.324666 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.324847 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.324996 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.329872 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.330415 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.330803 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z484r" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.333544 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404427 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404613 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404661 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404708 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pg8\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-kube-api-access-j5pg8\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404762 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404814 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404860 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404960 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.404990 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.405107 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.405161 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506361 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506429 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506482 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5pg8\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-kube-api-access-j5pg8\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506515 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506532 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506556 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506680 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506697 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506801 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506817 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.506884 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 
07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.507689 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.507995 4764 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.510183 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.513190 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.513256 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.513647 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.515543 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.516803 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.522384 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.533522 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5pg8\" (UniqueName: \"kubernetes.io/projected/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-kube-api-access-j5pg8\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.535309 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c1fbc57c-38ee-49be-bee5-4b04c5ef3211-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.554666 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c1fbc57c-38ee-49be-bee5-4b04c5ef3211\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.653338 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.917109 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-tcjv2"] Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.918655 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.920977 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 07:38:21 crc kubenswrapper[4764]: I0127 07:38:21.933841 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-tcjv2"] Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.016218 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.016277 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-nb\") pod 
\"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.016303 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g5sc\" (UniqueName: \"kubernetes.io/projected/fedd6d12-fd91-4732-a4da-af606fe17102-kube-api-access-2g5sc\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.016343 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.016381 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-config\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.016403 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.016426 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.039170 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-tcjv2"] Jan 27 07:38:22 crc kubenswrapper[4764]: E0127 07:38:22.041091 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-2g5sc openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" podUID="fedd6d12-fd91-4732-a4da-af606fe17102" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.068105 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-s8lpl"] Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.069757 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.079090 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-s8lpl"] Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.118469 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-config\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.118595 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvrp\" (UniqueName: \"kubernetes.io/projected/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-kube-api-access-ltvrp\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.118842 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.118936 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.118992 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-nb\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119024 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g5sc\" (UniqueName: \"kubernetes.io/projected/fedd6d12-fd91-4732-a4da-af606fe17102-kube-api-access-2g5sc\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119188 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119247 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119351 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119506 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-config\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119553 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119595 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119642 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.119669 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.120267 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.120344 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.120594 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-config\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.121280 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.121318 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.121712 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-nb\") pod 
\"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.172844 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g5sc\" (UniqueName: \"kubernetes.io/projected/fedd6d12-fd91-4732-a4da-af606fe17102-kube-api-access-2g5sc\") pod \"dnsmasq-dns-668b55cdd7-tcjv2\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.221307 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvrp\" (UniqueName: \"kubernetes.io/projected/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-kube-api-access-ltvrp\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.221402 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.221461 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.221487 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: 
\"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.221568 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.221589 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.221612 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-config\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.222385 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-config\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.223248 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 
crc kubenswrapper[4764]: I0127 07:38:22.224282 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.224641 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.224939 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.225005 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.230046 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.250785 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.259014 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.273777 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvrp\" (UniqueName: \"kubernetes.io/projected/4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b-kube-api-access-ltvrp\") pod \"dnsmasq-dns-66fc59ccbf-s8lpl\" (UID: \"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b\") " pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: W0127 07:38:22.277010 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1fbc57c_38ee_49be_bee5_4b04c5ef3211.slice/crio-72156b0b0ee7005ec8a9ca61558d2fddbd7f2d9388980222a967bcebdb4d0e3f WatchSource:0}: Error finding container 72156b0b0ee7005ec8a9ca61558d2fddbd7f2d9388980222a967bcebdb4d0e3f: Status 404 returned error can't find the container with id 72156b0b0ee7005ec8a9ca61558d2fddbd7f2d9388980222a967bcebdb4d0e3f Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.322681 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-openstack-edpm-ipam\") pod \"fedd6d12-fd91-4732-a4da-af606fe17102\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.322756 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-sb\") pod \"fedd6d12-fd91-4732-a4da-af606fe17102\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.322790 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g5sc\" (UniqueName: \"kubernetes.io/projected/fedd6d12-fd91-4732-a4da-af606fe17102-kube-api-access-2g5sc\") pod 
\"fedd6d12-fd91-4732-a4da-af606fe17102\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.322816 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-nb\") pod \"fedd6d12-fd91-4732-a4da-af606fe17102\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.322875 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-swift-storage-0\") pod \"fedd6d12-fd91-4732-a4da-af606fe17102\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.322967 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-config\") pod \"fedd6d12-fd91-4732-a4da-af606fe17102\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.322990 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-svc\") pod \"fedd6d12-fd91-4732-a4da-af606fe17102\" (UID: \"fedd6d12-fd91-4732-a4da-af606fe17102\") " Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.323697 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fedd6d12-fd91-4732-a4da-af606fe17102" (UID: "fedd6d12-fd91-4732-a4da-af606fe17102"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.323958 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "fedd6d12-fd91-4732-a4da-af606fe17102" (UID: "fedd6d12-fd91-4732-a4da-af606fe17102"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.324182 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fedd6d12-fd91-4732-a4da-af606fe17102" (UID: "fedd6d12-fd91-4732-a4da-af606fe17102"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.325642 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fedd6d12-fd91-4732-a4da-af606fe17102" (UID: "fedd6d12-fd91-4732-a4da-af606fe17102"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.325706 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-config" (OuterVolumeSpecName: "config") pod "fedd6d12-fd91-4732-a4da-af606fe17102" (UID: "fedd6d12-fd91-4732-a4da-af606fe17102"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.326142 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fedd6d12-fd91-4732-a4da-af606fe17102" (UID: "fedd6d12-fd91-4732-a4da-af606fe17102"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.326949 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedd6d12-fd91-4732-a4da-af606fe17102-kube-api-access-2g5sc" (OuterVolumeSpecName: "kube-api-access-2g5sc") pod "fedd6d12-fd91-4732-a4da-af606fe17102" (UID: "fedd6d12-fd91-4732-a4da-af606fe17102"). InnerVolumeSpecName "kube-api-access-2g5sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.392816 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.425223 4764 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.425296 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.425308 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g5sc\" (UniqueName: \"kubernetes.io/projected/fedd6d12-fd91-4732-a4da-af606fe17102-kube-api-access-2g5sc\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.425317 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.425326 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.425337 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.425362 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedd6d12-fd91-4732-a4da-af606fe17102-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 
07:38:22.458167 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff27bbf-49bf-4af7-aedb-e59e84269af3" path="/var/lib/kubelet/pods/dff27bbf-49bf-4af7-aedb-e59e84269af3/volumes" Jan 27 07:38:22 crc kubenswrapper[4764]: I0127 07:38:22.883366 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-s8lpl"] Jan 27 07:38:22 crc kubenswrapper[4764]: W0127 07:38:22.889834 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dc32db6_6ef0_4a6b_b6bc_c8315ec7748b.slice/crio-28debc09342472d97398ad03ce8dae10ce6a8153c1cfe3a6ec9aa6c70bd18006 WatchSource:0}: Error finding container 28debc09342472d97398ad03ce8dae10ce6a8153c1cfe3a6ec9aa6c70bd18006: Status 404 returned error can't find the container with id 28debc09342472d97398ad03ce8dae10ce6a8153c1cfe3a6ec9aa6c70bd18006 Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.263489 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c1fbc57c-38ee-49be-bee5-4b04c5ef3211","Type":"ContainerStarted","Data":"72156b0b0ee7005ec8a9ca61558d2fddbd7f2d9388980222a967bcebdb4d0e3f"} Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.266940 4764 generic.go:334] "Generic (PLEG): container finished" podID="4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b" containerID="a5f44ed49619ce3cef7a280c9667105ea29c3144e0b8bef172697789078076f1" exitCode=0 Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.267035 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" event={"ID":"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b","Type":"ContainerDied","Data":"a5f44ed49619ce3cef7a280c9667105ea29c3144e0b8bef172697789078076f1"} Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.267069 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" 
event={"ID":"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b","Type":"ContainerStarted","Data":"28debc09342472d97398ad03ce8dae10ce6a8153c1cfe3a6ec9aa6c70bd18006"} Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.271007 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-tcjv2" Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.271728 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"95dd078e-042c-48b3-aa1c-f8f801d66ae0","Type":"ContainerStarted","Data":"0dfc6ec2a29a70d9943b7c94fd497bd52a45d07b13dbf8504b582fd3d18e9d28"} Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.356370 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-tcjv2"] Jan 27 07:38:23 crc kubenswrapper[4764]: I0127 07:38:23.366554 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-tcjv2"] Jan 27 07:38:24 crc kubenswrapper[4764]: I0127 07:38:24.280666 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c1fbc57c-38ee-49be-bee5-4b04c5ef3211","Type":"ContainerStarted","Data":"e79379e68499e940087f83f55197bf2aea4fe00f8cffa4fa48badc7588960d22"} Jan 27 07:38:24 crc kubenswrapper[4764]: I0127 07:38:24.283058 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" event={"ID":"4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b","Type":"ContainerStarted","Data":"e2e0a24f4e51c5ef999cd98a4bf2ff89b5ff3daaacbe11fd355e03fa2dca2a3a"} Jan 27 07:38:24 crc kubenswrapper[4764]: I0127 07:38:24.283363 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:24 crc kubenswrapper[4764]: I0127 07:38:24.333924 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" podStartSLOduration=2.333906865 
podStartE2EDuration="2.333906865s" podCreationTimestamp="2026-01-27 07:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:24.327313277 +0000 UTC m=+1316.922935823" watchObservedRunningTime="2026-01-27 07:38:24.333906865 +0000 UTC m=+1316.929529391" Jan 27 07:38:24 crc kubenswrapper[4764]: I0127 07:38:24.448148 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedd6d12-fd91-4732-a4da-af606fe17102" path="/var/lib/kubelet/pods/fedd6d12-fd91-4732-a4da-af606fe17102/volumes" Jan 27 07:38:25 crc kubenswrapper[4764]: I0127 07:38:25.457470 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 07:38:25 crc kubenswrapper[4764]: I0127 07:38:25.459277 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 07:38:25 crc kubenswrapper[4764]: I0127 07:38:25.464264 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 07:38:26 crc kubenswrapper[4764]: I0127 07:38:26.311914 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 07:38:27 crc kubenswrapper[4764]: I0127 07:38:27.814576 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 07:38:27 crc kubenswrapper[4764]: I0127 07:38:27.815463 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 07:38:27 crc kubenswrapper[4764]: I0127 07:38:27.815585 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 07:38:27 crc kubenswrapper[4764]: I0127 07:38:27.826057 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 07:38:28 crc kubenswrapper[4764]: I0127 07:38:28.326086 
4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 07:38:28 crc kubenswrapper[4764]: I0127 07:38:28.334784 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 07:38:32 crc kubenswrapper[4764]: I0127 07:38:32.395607 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66fc59ccbf-s8lpl" Jan 27 07:38:32 crc kubenswrapper[4764]: I0127 07:38:32.491206 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-7p47r"] Jan 27 07:38:32 crc kubenswrapper[4764]: I0127 07:38:32.491431 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" podUID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerName="dnsmasq-dns" containerID="cri-o://fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957" gracePeriod=10 Jan 27 07:38:32 crc kubenswrapper[4764]: I0127 07:38:32.976513 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.067219 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64lwp\" (UniqueName: \"kubernetes.io/projected/308a7a6e-bf8e-489d-bc3b-e858341c39b8-kube-api-access-64lwp\") pod \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.067844 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-config\") pod \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.067906 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-svc\") pod \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.068217 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-nb\") pod \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.068430 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-swift-storage-0\") pod \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.068878 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-sb\") pod \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\" (UID: \"308a7a6e-bf8e-489d-bc3b-e858341c39b8\") " Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.092830 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308a7a6e-bf8e-489d-bc3b-e858341c39b8-kube-api-access-64lwp" (OuterVolumeSpecName: "kube-api-access-64lwp") pod "308a7a6e-bf8e-489d-bc3b-e858341c39b8" (UID: "308a7a6e-bf8e-489d-bc3b-e858341c39b8"). InnerVolumeSpecName "kube-api-access-64lwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.131017 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-config" (OuterVolumeSpecName: "config") pod "308a7a6e-bf8e-489d-bc3b-e858341c39b8" (UID: "308a7a6e-bf8e-489d-bc3b-e858341c39b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.132699 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "308a7a6e-bf8e-489d-bc3b-e858341c39b8" (UID: "308a7a6e-bf8e-489d-bc3b-e858341c39b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.134088 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "308a7a6e-bf8e-489d-bc3b-e858341c39b8" (UID: "308a7a6e-bf8e-489d-bc3b-e858341c39b8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.151545 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "308a7a6e-bf8e-489d-bc3b-e858341c39b8" (UID: "308a7a6e-bf8e-489d-bc3b-e858341c39b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.152940 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "308a7a6e-bf8e-489d-bc3b-e858341c39b8" (UID: "308a7a6e-bf8e-489d-bc3b-e858341c39b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.172029 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64lwp\" (UniqueName: \"kubernetes.io/projected/308a7a6e-bf8e-489d-bc3b-e858341c39b8-kube-api-access-64lwp\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.172305 4764 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.172368 4764 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.172425 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 
27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.172499 4764 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.172561 4764 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/308a7a6e-bf8e-489d-bc3b-e858341c39b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.384578 4764 generic.go:334] "Generic (PLEG): container finished" podID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerID="fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957" exitCode=0 Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.384657 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" event={"ID":"308a7a6e-bf8e-489d-bc3b-e858341c39b8","Type":"ContainerDied","Data":"fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957"} Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.384708 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" event={"ID":"308a7a6e-bf8e-489d-bc3b-e858341c39b8","Type":"ContainerDied","Data":"d43c3bb5c1613a3983c374318ff9d9cde83ce92e857cf935e1924fd0dd240bc4"} Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.384723 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-7p47r" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.384764 4764 scope.go:117] "RemoveContainer" containerID="fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.433143 4764 scope.go:117] "RemoveContainer" containerID="d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.450307 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-7p47r"] Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.466724 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-7p47r"] Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.468741 4764 scope.go:117] "RemoveContainer" containerID="fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957" Jan 27 07:38:33 crc kubenswrapper[4764]: E0127 07:38:33.469259 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957\": container with ID starting with fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957 not found: ID does not exist" containerID="fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.469315 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957"} err="failed to get container status \"fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957\": rpc error: code = NotFound desc = could not find container \"fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957\": container with ID starting with fd40a99eb4f83980959a7a383e7dcfa09743173cdd0d9cb746e8c1aa4a31a957 not found: ID does not exist" Jan 27 
07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.469351 4764 scope.go:117] "RemoveContainer" containerID="d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c" Jan 27 07:38:33 crc kubenswrapper[4764]: E0127 07:38:33.469788 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c\": container with ID starting with d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c not found: ID does not exist" containerID="d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c" Jan 27 07:38:33 crc kubenswrapper[4764]: I0127 07:38:33.469814 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c"} err="failed to get container status \"d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c\": rpc error: code = NotFound desc = could not find container \"d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c\": container with ID starting with d72f0ad2f6e0452a99ce4aeeaba10050fc8b4bc7df74b5426dbf9b0be671742c not found: ID does not exist" Jan 27 07:38:34 crc kubenswrapper[4764]: I0127 07:38:34.463574 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" path="/var/lib/kubelet/pods/308a7a6e-bf8e-489d-bc3b-e858341c39b8/volumes" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.936589 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt"] Jan 27 07:38:44 crc kubenswrapper[4764]: E0127 07:38:44.937815 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerName="init" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.937840 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerName="init" Jan 27 07:38:44 crc kubenswrapper[4764]: E0127 07:38:44.937908 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerName="dnsmasq-dns" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.937922 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerName="dnsmasq-dns" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.938211 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="308a7a6e-bf8e-489d-bc3b-e858341c39b8" containerName="dnsmasq-dns" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.939053 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.941352 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.941538 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.941746 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.947607 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:38:44 crc kubenswrapper[4764]: I0127 07:38:44.953838 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt"] Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.051585 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.051985 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.052226 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2xh\" (UniqueName: \"kubernetes.io/projected/ff81f29d-57aa-4263-8554-6f4d4318bdd4-kube-api-access-qz2xh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.052360 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.154025 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2xh\" (UniqueName: \"kubernetes.io/projected/ff81f29d-57aa-4263-8554-6f4d4318bdd4-kube-api-access-qz2xh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" 
(UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.154101 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.154184 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.154265 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.161110 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.162250 4764 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.163932 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.176127 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2xh\" (UniqueName: \"kubernetes.io/projected/ff81f29d-57aa-4263-8554-6f4d4318bdd4-kube-api-access-qz2xh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.269876 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:38:45 crc kubenswrapper[4764]: I0127 07:38:45.827881 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt"] Jan 27 07:38:46 crc kubenswrapper[4764]: I0127 07:38:46.527476 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" event={"ID":"ff81f29d-57aa-4263-8554-6f4d4318bdd4","Type":"ContainerStarted","Data":"4866fa3296b44974dfe7a364a35180c50e857b96db6138aa9b5566b966c03e6d"} Jan 27 07:38:54 crc kubenswrapper[4764]: I0127 07:38:54.607889 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" event={"ID":"ff81f29d-57aa-4263-8554-6f4d4318bdd4","Type":"ContainerStarted","Data":"956c15ff38a246a0be71211ff572dbef1b4e0bd476368e24107c00c3ea4e9d25"} Jan 27 07:38:54 crc kubenswrapper[4764]: I0127 07:38:54.629213 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" podStartSLOduration=2.235547761 podStartE2EDuration="10.62919426s" podCreationTimestamp="2026-01-27 07:38:44 +0000 UTC" firstStartedPulling="2026-01-27 07:38:45.826883567 +0000 UTC m=+1338.422506113" lastFinishedPulling="2026-01-27 07:38:54.220530086 +0000 UTC m=+1346.816152612" observedRunningTime="2026-01-27 07:38:54.625427649 +0000 UTC m=+1347.221050205" watchObservedRunningTime="2026-01-27 07:38:54.62919426 +0000 UTC m=+1347.224816786" Jan 27 07:38:55 crc kubenswrapper[4764]: I0127 07:38:55.620221 4764 generic.go:334] "Generic (PLEG): container finished" podID="95dd078e-042c-48b3-aa1c-f8f801d66ae0" containerID="0dfc6ec2a29a70d9943b7c94fd497bd52a45d07b13dbf8504b582fd3d18e9d28" exitCode=0 Jan 27 07:38:55 crc kubenswrapper[4764]: I0127 07:38:55.620276 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"95dd078e-042c-48b3-aa1c-f8f801d66ae0","Type":"ContainerDied","Data":"0dfc6ec2a29a70d9943b7c94fd497bd52a45d07b13dbf8504b582fd3d18e9d28"} Jan 27 07:38:56 crc kubenswrapper[4764]: I0127 07:38:56.643820 4764 generic.go:334] "Generic (PLEG): container finished" podID="c1fbc57c-38ee-49be-bee5-4b04c5ef3211" containerID="e79379e68499e940087f83f55197bf2aea4fe00f8cffa4fa48badc7588960d22" exitCode=0 Jan 27 07:38:56 crc kubenswrapper[4764]: I0127 07:38:56.643882 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c1fbc57c-38ee-49be-bee5-4b04c5ef3211","Type":"ContainerDied","Data":"e79379e68499e940087f83f55197bf2aea4fe00f8cffa4fa48badc7588960d22"} Jan 27 07:38:56 crc kubenswrapper[4764]: I0127 07:38:56.649750 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"95dd078e-042c-48b3-aa1c-f8f801d66ae0","Type":"ContainerStarted","Data":"bf0ba8b13e695087849991899797a14b14bc32a9a333c61dfd361471170ac3b1"} Jan 27 07:38:56 crc kubenswrapper[4764]: I0127 07:38:56.650038 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 07:38:56 crc kubenswrapper[4764]: I0127 07:38:56.709756 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.709733953 podStartE2EDuration="36.709733953s" podCreationTimestamp="2026-01-27 07:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:56.699576999 +0000 UTC m=+1349.295199555" watchObservedRunningTime="2026-01-27 07:38:56.709733953 +0000 UTC m=+1349.305356479" Jan 27 07:38:57 crc kubenswrapper[4764]: I0127 07:38:57.659792 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"c1fbc57c-38ee-49be-bee5-4b04c5ef3211","Type":"ContainerStarted","Data":"0b5eb52703bc1f1dc1151f1d41d07019266e1844988d4dab74e417dd8a22e13e"} Jan 27 07:38:57 crc kubenswrapper[4764]: I0127 07:38:57.660288 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:38:57 crc kubenswrapper[4764]: I0127 07:38:57.690090 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.690066744 podStartE2EDuration="36.690066744s" podCreationTimestamp="2026-01-27 07:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:38:57.680405874 +0000 UTC m=+1350.276028410" watchObservedRunningTime="2026-01-27 07:38:57.690066744 +0000 UTC m=+1350.285689270" Jan 27 07:39:06 crc kubenswrapper[4764]: I0127 07:39:06.742348 4764 generic.go:334] "Generic (PLEG): container finished" podID="ff81f29d-57aa-4263-8554-6f4d4318bdd4" containerID="956c15ff38a246a0be71211ff572dbef1b4e0bd476368e24107c00c3ea4e9d25" exitCode=0 Jan 27 07:39:06 crc kubenswrapper[4764]: I0127 07:39:06.742413 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" event={"ID":"ff81f29d-57aa-4263-8554-6f4d4318bdd4","Type":"ContainerDied","Data":"956c15ff38a246a0be71211ff572dbef1b4e0bd476368e24107c00c3ea4e9d25"} Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.186369 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.233230 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-ssh-key-openstack-edpm-ipam\") pod \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.233627 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-inventory\") pod \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.233724 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-repo-setup-combined-ca-bundle\") pod \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.233770 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz2xh\" (UniqueName: \"kubernetes.io/projected/ff81f29d-57aa-4263-8554-6f4d4318bdd4-kube-api-access-qz2xh\") pod \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\" (UID: \"ff81f29d-57aa-4263-8554-6f4d4318bdd4\") " Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.238995 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ff81f29d-57aa-4263-8554-6f4d4318bdd4" (UID: "ff81f29d-57aa-4263-8554-6f4d4318bdd4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.241593 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff81f29d-57aa-4263-8554-6f4d4318bdd4-kube-api-access-qz2xh" (OuterVolumeSpecName: "kube-api-access-qz2xh") pod "ff81f29d-57aa-4263-8554-6f4d4318bdd4" (UID: "ff81f29d-57aa-4263-8554-6f4d4318bdd4"). InnerVolumeSpecName "kube-api-access-qz2xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.261513 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-inventory" (OuterVolumeSpecName: "inventory") pod "ff81f29d-57aa-4263-8554-6f4d4318bdd4" (UID: "ff81f29d-57aa-4263-8554-6f4d4318bdd4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.273824 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff81f29d-57aa-4263-8554-6f4d4318bdd4" (UID: "ff81f29d-57aa-4263-8554-6f4d4318bdd4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.335821 4764 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.335856 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz2xh\" (UniqueName: \"kubernetes.io/projected/ff81f29d-57aa-4263-8554-6f4d4318bdd4-kube-api-access-qz2xh\") on node \"crc\" DevicePath \"\"" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.335867 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.335877 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff81f29d-57aa-4263-8554-6f4d4318bdd4-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.766718 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" event={"ID":"ff81f29d-57aa-4263-8554-6f4d4318bdd4","Type":"ContainerDied","Data":"4866fa3296b44974dfe7a364a35180c50e857b96db6138aa9b5566b966c03e6d"} Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.767166 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4866fa3296b44974dfe7a364a35180c50e857b96db6138aa9b5566b966c03e6d" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.766804 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.870051 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb"] Jan 27 07:39:08 crc kubenswrapper[4764]: E0127 07:39:08.870483 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff81f29d-57aa-4263-8554-6f4d4318bdd4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.870498 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff81f29d-57aa-4263-8554-6f4d4318bdd4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.870677 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff81f29d-57aa-4263-8554-6f4d4318bdd4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.871335 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.877143 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.877319 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.877497 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.877943 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.902707 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb"] Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.949608 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.949720 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:08 crc kubenswrapper[4764]: I0127 07:39:08.949846 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6n5n\" (UniqueName: \"kubernetes.io/projected/42c07d3a-5dda-4260-81a0-af6bb112ea25-kube-api-access-r6n5n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.052169 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.052308 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6n5n\" (UniqueName: \"kubernetes.io/projected/42c07d3a-5dda-4260-81a0-af6bb112ea25-kube-api-access-r6n5n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.052372 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.057218 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.058807 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.068136 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6n5n\" (UniqueName: \"kubernetes.io/projected/42c07d3a-5dda-4260-81a0-af6bb112ea25-kube-api-access-r6n5n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hnb\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.200178 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.747674 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb"] Jan 27 07:39:09 crc kubenswrapper[4764]: W0127 07:39:09.752044 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c07d3a_5dda_4260_81a0_af6bb112ea25.slice/crio-9f02b3bba2a72602e599382044ec0fe2e419fcc3ce02659188529a04d68678da WatchSource:0}: Error finding container 9f02b3bba2a72602e599382044ec0fe2e419fcc3ce02659188529a04d68678da: Status 404 returned error can't find the container with id 9f02b3bba2a72602e599382044ec0fe2e419fcc3ce02659188529a04d68678da Jan 27 07:39:09 crc kubenswrapper[4764]: I0127 07:39:09.776263 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" event={"ID":"42c07d3a-5dda-4260-81a0-af6bb112ea25","Type":"ContainerStarted","Data":"9f02b3bba2a72602e599382044ec0fe2e419fcc3ce02659188529a04d68678da"} Jan 27 07:39:10 crc kubenswrapper[4764]: I0127 07:39:10.640880 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 07:39:10 crc kubenswrapper[4764]: I0127 07:39:10.791808 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" event={"ID":"42c07d3a-5dda-4260-81a0-af6bb112ea25","Type":"ContainerStarted","Data":"baead9518b20631ceb296d538c8652e42827a86175bd2cff1bf19c7b941a90d4"} Jan 27 07:39:10 crc kubenswrapper[4764]: I0127 07:39:10.813946 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" podStartSLOduration=2.412797898 podStartE2EDuration="2.813926828s" podCreationTimestamp="2026-01-27 07:39:08 +0000 UTC" 
firstStartedPulling="2026-01-27 07:39:09.754249729 +0000 UTC m=+1362.349872265" lastFinishedPulling="2026-01-27 07:39:10.155378669 +0000 UTC m=+1362.751001195" observedRunningTime="2026-01-27 07:39:10.811372159 +0000 UTC m=+1363.406994705" watchObservedRunningTime="2026-01-27 07:39:10.813926828 +0000 UTC m=+1363.409549354" Jan 27 07:39:11 crc kubenswrapper[4764]: I0127 07:39:11.657633 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 07:39:13 crc kubenswrapper[4764]: I0127 07:39:13.825207 4764 generic.go:334] "Generic (PLEG): container finished" podID="42c07d3a-5dda-4260-81a0-af6bb112ea25" containerID="baead9518b20631ceb296d538c8652e42827a86175bd2cff1bf19c7b941a90d4" exitCode=0 Jan 27 07:39:13 crc kubenswrapper[4764]: I0127 07:39:13.825298 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" event={"ID":"42c07d3a-5dda-4260-81a0-af6bb112ea25","Type":"ContainerDied","Data":"baead9518b20631ceb296d538c8652e42827a86175bd2cff1bf19c7b941a90d4"} Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.263167 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.401541 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-inventory\") pod \"42c07d3a-5dda-4260-81a0-af6bb112ea25\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.401667 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-ssh-key-openstack-edpm-ipam\") pod \"42c07d3a-5dda-4260-81a0-af6bb112ea25\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.401878 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6n5n\" (UniqueName: \"kubernetes.io/projected/42c07d3a-5dda-4260-81a0-af6bb112ea25-kube-api-access-r6n5n\") pod \"42c07d3a-5dda-4260-81a0-af6bb112ea25\" (UID: \"42c07d3a-5dda-4260-81a0-af6bb112ea25\") " Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.410645 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c07d3a-5dda-4260-81a0-af6bb112ea25-kube-api-access-r6n5n" (OuterVolumeSpecName: "kube-api-access-r6n5n") pod "42c07d3a-5dda-4260-81a0-af6bb112ea25" (UID: "42c07d3a-5dda-4260-81a0-af6bb112ea25"). InnerVolumeSpecName "kube-api-access-r6n5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.431797 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-inventory" (OuterVolumeSpecName: "inventory") pod "42c07d3a-5dda-4260-81a0-af6bb112ea25" (UID: "42c07d3a-5dda-4260-81a0-af6bb112ea25"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.433310 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "42c07d3a-5dda-4260-81a0-af6bb112ea25" (UID: "42c07d3a-5dda-4260-81a0-af6bb112ea25"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.504642 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6n5n\" (UniqueName: \"kubernetes.io/projected/42c07d3a-5dda-4260-81a0-af6bb112ea25-kube-api-access-r6n5n\") on node \"crc\" DevicePath \"\"" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.504991 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.505004 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c07d3a-5dda-4260-81a0-af6bb112ea25-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.842771 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" event={"ID":"42c07d3a-5dda-4260-81a0-af6bb112ea25","Type":"ContainerDied","Data":"9f02b3bba2a72602e599382044ec0fe2e419fcc3ce02659188529a04d68678da"} Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.843046 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f02b3bba2a72602e599382044ec0fe2e419fcc3ce02659188529a04d68678da" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 
07:39:15.842867 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hnb" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.926427 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv"] Jan 27 07:39:15 crc kubenswrapper[4764]: E0127 07:39:15.926824 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c07d3a-5dda-4260-81a0-af6bb112ea25" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.926844 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c07d3a-5dda-4260-81a0-af6bb112ea25" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.927027 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c07d3a-5dda-4260-81a0-af6bb112ea25" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.927692 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.929655 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.929931 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.930155 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.930587 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:39:15 crc kubenswrapper[4764]: I0127 07:39:15.939858 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv"] Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.015959 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.016010 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqm5\" (UniqueName: \"kubernetes.io/projected/6f8df847-d027-42ca-a466-a8ccd60b9428-kube-api-access-crqm5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 
07:39:16.016047 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.016247 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.117292 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.118455 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crqm5\" (UniqueName: \"kubernetes.io/projected/6f8df847-d027-42ca-a466-a8ccd60b9428-kube-api-access-crqm5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.118510 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.118588 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.122429 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.122521 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.131197 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.133715 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crqm5\" (UniqueName: \"kubernetes.io/projected/6f8df847-d027-42ca-a466-a8ccd60b9428-kube-api-access-crqm5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.255892 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:39:16 crc kubenswrapper[4764]: I0127 07:39:16.862962 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv"] Jan 27 07:39:17 crc kubenswrapper[4764]: I0127 07:39:17.860977 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" event={"ID":"6f8df847-d027-42ca-a466-a8ccd60b9428","Type":"ContainerStarted","Data":"3b9d13483727152eec66058d10d6574e6b2c4c0a26bae675ba1a296209b35ca7"} Jan 27 07:39:17 crc kubenswrapper[4764]: I0127 07:39:17.861304 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" event={"ID":"6f8df847-d027-42ca-a466-a8ccd60b9428","Type":"ContainerStarted","Data":"3bb12c97ac967cdc546d3d31230cb393d1c35270fb5103ff522758a555edc618"} Jan 27 07:39:17 crc kubenswrapper[4764]: I0127 07:39:17.893050 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" podStartSLOduration=2.475402434 podStartE2EDuration="2.893031589s" podCreationTimestamp="2026-01-27 07:39:15 +0000 UTC" firstStartedPulling="2026-01-27 07:39:16.858331963 +0000 UTC m=+1369.453954489" 
lastFinishedPulling="2026-01-27 07:39:17.275961098 +0000 UTC m=+1369.871583644" observedRunningTime="2026-01-27 07:39:17.884874549 +0000 UTC m=+1370.480497075" watchObservedRunningTime="2026-01-27 07:39:17.893031589 +0000 UTC m=+1370.488654115" Jan 27 07:39:40 crc kubenswrapper[4764]: I0127 07:39:40.712569 4764 scope.go:117] "RemoveContainer" containerID="899ef6d9606da1a8acae8a0882e2b66b47b4f7084bace38a2928a69f1cbb4c87" Jan 27 07:39:40 crc kubenswrapper[4764]: I0127 07:39:40.745818 4764 scope.go:117] "RemoveContainer" containerID="2f3bc34933aeb975c4fe9701d8626d55f49ca5136ccc5756d0fcfbe09632b069" Jan 27 07:39:40 crc kubenswrapper[4764]: I0127 07:39:40.832336 4764 scope.go:117] "RemoveContainer" containerID="9ccf0fb8f0b787b29f645f649929b3330a65277eaa21306d7909fad4ba59d538" Jan 27 07:39:53 crc kubenswrapper[4764]: I0127 07:39:53.762586 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:39:53 crc kubenswrapper[4764]: I0127 07:39:53.763271 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:40:23 crc kubenswrapper[4764]: I0127 07:40:23.762926 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:40:23 crc kubenswrapper[4764]: I0127 07:40:23.763847 4764 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:40:40 crc kubenswrapper[4764]: I0127 07:40:40.983498 4764 scope.go:117] "RemoveContainer" containerID="7bb5665051cd51c41c15030e1ceaaa3d898c261b0fc4d0f29a579b2e1e40b7b0" Jan 27 07:40:41 crc kubenswrapper[4764]: I0127 07:40:41.016722 4764 scope.go:117] "RemoveContainer" containerID="7eee264c2b4c3cf8241df972dc54f7dc10315bf28cc242bd49f90a9e5e8f55b8" Jan 27 07:40:41 crc kubenswrapper[4764]: I0127 07:40:41.048940 4764 scope.go:117] "RemoveContainer" containerID="d6889f54e291162dba7902a4ed27ed0fcc11a596c4d7b19375b415ed8a2be5ee" Jan 27 07:40:41 crc kubenswrapper[4764]: I0127 07:40:41.094863 4764 scope.go:117] "RemoveContainer" containerID="f34b20524afae895db21235bb2011c9700c5edf74ecc839e9a0c2bc527b60495" Jan 27 07:40:41 crc kubenswrapper[4764]: I0127 07:40:41.167798 4764 scope.go:117] "RemoveContainer" containerID="2ff4f7a80c720e638df5f2ea08f124a62ecacfeb4c71747f31563dc19462b7d1" Jan 27 07:40:53 crc kubenswrapper[4764]: I0127 07:40:53.762603 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:40:53 crc kubenswrapper[4764]: I0127 07:40:53.763247 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:40:53 crc kubenswrapper[4764]: I0127 07:40:53.763323 
4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:40:53 crc kubenswrapper[4764]: I0127 07:40:53.764477 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b810d312218001b771dbac4e138fc3c15bb0fd651ad5e5238d35ccb0d85c52f4"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:40:53 crc kubenswrapper[4764]: I0127 07:40:53.764563 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://b810d312218001b771dbac4e138fc3c15bb0fd651ad5e5238d35ccb0d85c52f4" gracePeriod=600 Jan 27 07:40:54 crc kubenswrapper[4764]: I0127 07:40:54.885461 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="b810d312218001b771dbac4e138fc3c15bb0fd651ad5e5238d35ccb0d85c52f4" exitCode=0 Jan 27 07:40:54 crc kubenswrapper[4764]: I0127 07:40:54.885553 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"b810d312218001b771dbac4e138fc3c15bb0fd651ad5e5238d35ccb0d85c52f4"} Jan 27 07:40:54 crc kubenswrapper[4764]: I0127 07:40:54.886263 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6"} Jan 27 07:40:54 crc kubenswrapper[4764]: I0127 07:40:54.886297 4764 scope.go:117] "RemoveContainer" 
containerID="1a6582187df2e5e6ef1f7d9ea2e06ec2178aed71a06db6ecea42208449605756" Jan 27 07:41:41 crc kubenswrapper[4764]: I0127 07:41:41.281952 4764 scope.go:117] "RemoveContainer" containerID="5c130bcdffd3247680a388e07f188696544349d9bec5053bd03f18f6d64dd3a9" Jan 27 07:41:41 crc kubenswrapper[4764]: I0127 07:41:41.302626 4764 scope.go:117] "RemoveContainer" containerID="4ae43e1bd6886cc6de4b6070681b1f52c0ccbbaa7779cca6a72840027ce9dbfa" Jan 27 07:41:41 crc kubenswrapper[4764]: I0127 07:41:41.321532 4764 scope.go:117] "RemoveContainer" containerID="097b9f1173d270d7e7fdc6be78a5d6438382a0932a03abdc51c1c16ab934723a" Jan 27 07:41:41 crc kubenswrapper[4764]: I0127 07:41:41.350263 4764 scope.go:117] "RemoveContainer" containerID="e43a28bf9f2893daea65f365081c9ba1c9c3cf8b6cb7d3ed556d2cc2b5ea32ae" Jan 27 07:42:17 crc kubenswrapper[4764]: I0127 07:42:17.054294 4764 generic.go:334] "Generic (PLEG): container finished" podID="6f8df847-d027-42ca-a466-a8ccd60b9428" containerID="3b9d13483727152eec66058d10d6574e6b2c4c0a26bae675ba1a296209b35ca7" exitCode=0 Jan 27 07:42:17 crc kubenswrapper[4764]: I0127 07:42:17.054854 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" event={"ID":"6f8df847-d027-42ca-a466-a8ccd60b9428","Type":"ContainerDied","Data":"3b9d13483727152eec66058d10d6574e6b2c4c0a26bae675ba1a296209b35ca7"} Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.466606 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.572387 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crqm5\" (UniqueName: \"kubernetes.io/projected/6f8df847-d027-42ca-a466-a8ccd60b9428-kube-api-access-crqm5\") pod \"6f8df847-d027-42ca-a466-a8ccd60b9428\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.572511 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam\") pod \"6f8df847-d027-42ca-a466-a8ccd60b9428\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.572646 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-inventory\") pod \"6f8df847-d027-42ca-a466-a8ccd60b9428\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.572728 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-bootstrap-combined-ca-bundle\") pod \"6f8df847-d027-42ca-a466-a8ccd60b9428\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.583336 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8df847-d027-42ca-a466-a8ccd60b9428-kube-api-access-crqm5" (OuterVolumeSpecName: "kube-api-access-crqm5") pod "6f8df847-d027-42ca-a466-a8ccd60b9428" (UID: "6f8df847-d027-42ca-a466-a8ccd60b9428"). InnerVolumeSpecName "kube-api-access-crqm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.583574 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6f8df847-d027-42ca-a466-a8ccd60b9428" (UID: "6f8df847-d027-42ca-a466-a8ccd60b9428"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:42:18 crc kubenswrapper[4764]: E0127 07:42:18.604783 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam podName:6f8df847-d027-42ca-a466-a8ccd60b9428 nodeName:}" failed. No retries permitted until 2026-01-27 07:42:19.104753277 +0000 UTC m=+1551.700375803 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam") pod "6f8df847-d027-42ca-a466-a8ccd60b9428" (UID: "6f8df847-d027-42ca-a466-a8ccd60b9428") : error deleting /var/lib/kubelet/pods/6f8df847-d027-42ca-a466-a8ccd60b9428/volume-subpaths: remove /var/lib/kubelet/pods/6f8df847-d027-42ca-a466-a8ccd60b9428/volume-subpaths: no such file or directory Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.607619 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-inventory" (OuterVolumeSpecName: "inventory") pod "6f8df847-d027-42ca-a466-a8ccd60b9428" (UID: "6f8df847-d027-42ca-a466-a8ccd60b9428"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.675787 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.675811 4764 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:42:18 crc kubenswrapper[4764]: I0127 07:42:18.675822 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crqm5\" (UniqueName: \"kubernetes.io/projected/6f8df847-d027-42ca-a466-a8ccd60b9428-kube-api-access-crqm5\") on node \"crc\" DevicePath \"\"" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.077076 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" event={"ID":"6f8df847-d027-42ca-a466-a8ccd60b9428","Type":"ContainerDied","Data":"3bb12c97ac967cdc546d3d31230cb393d1c35270fb5103ff522758a555edc618"} Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.077146 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb12c97ac967cdc546d3d31230cb393d1c35270fb5103ff522758a555edc618" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.077155 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.186916 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam\") pod \"6f8df847-d027-42ca-a466-a8ccd60b9428\" (UID: \"6f8df847-d027-42ca-a466-a8ccd60b9428\") " Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.191721 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6f8df847-d027-42ca-a466-a8ccd60b9428" (UID: "6f8df847-d027-42ca-a466-a8ccd60b9428"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.226135 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv"] Jan 27 07:42:19 crc kubenswrapper[4764]: E0127 07:42:19.226571 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8df847-d027-42ca-a466-a8ccd60b9428" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.226592 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8df847-d027-42ca-a466-a8ccd60b9428" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.226788 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8df847-d027-42ca-a466-a8ccd60b9428" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.227348 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.240303 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv"] Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.289232 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f8df847-d027-42ca-a466-a8ccd60b9428-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.390271 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvvs\" (UniqueName: \"kubernetes.io/projected/8b37bfd5-b31b-489d-a973-ffeeb769660c-kube-api-access-rfvvs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.390402 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.390567 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" 
Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.491752 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.491824 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.491901 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvvs\" (UniqueName: \"kubernetes.io/projected/8b37bfd5-b31b-489d-a973-ffeeb769660c-kube-api-access-rfvvs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.498596 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.502712 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.518736 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvvs\" (UniqueName: \"kubernetes.io/projected/8b37bfd5-b31b-489d-a973-ffeeb769660c-kube-api-access-rfvvs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:19 crc kubenswrapper[4764]: I0127 07:42:19.573708 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:42:20 crc kubenswrapper[4764]: I0127 07:42:20.075185 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv"] Jan 27 07:42:20 crc kubenswrapper[4764]: W0127 07:42:20.082168 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b37bfd5_b31b_489d_a973_ffeeb769660c.slice/crio-f7a380e20db196237fbd2a2f799cbcd41af5fc0a804917785f03f5e3dac284de WatchSource:0}: Error finding container f7a380e20db196237fbd2a2f799cbcd41af5fc0a804917785f03f5e3dac284de: Status 404 returned error can't find the container with id f7a380e20db196237fbd2a2f799cbcd41af5fc0a804917785f03f5e3dac284de Jan 27 07:42:20 crc kubenswrapper[4764]: I0127 07:42:20.085800 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:42:21 crc kubenswrapper[4764]: I0127 07:42:21.099265 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" event={"ID":"8b37bfd5-b31b-489d-a973-ffeeb769660c","Type":"ContainerStarted","Data":"f7a380e20db196237fbd2a2f799cbcd41af5fc0a804917785f03f5e3dac284de"} Jan 27 07:42:22 crc kubenswrapper[4764]: I0127 07:42:22.108366 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" event={"ID":"8b37bfd5-b31b-489d-a973-ffeeb769660c","Type":"ContainerStarted","Data":"fb92aac4bc950ee30903f0b0d0f7ba2f67c66cc0e77174a1ad4fe9bfdb746f3c"} Jan 27 07:42:22 crc kubenswrapper[4764]: I0127 07:42:22.133419 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" podStartSLOduration=2.301774713 podStartE2EDuration="3.133405207s" podCreationTimestamp="2026-01-27 07:42:19 +0000 UTC" firstStartedPulling="2026-01-27 07:42:20.085413514 +0000 UTC m=+1552.681036030" lastFinishedPulling="2026-01-27 07:42:20.917043998 +0000 UTC m=+1553.512666524" observedRunningTime="2026-01-27 07:42:22.126494312 +0000 UTC m=+1554.722116868" watchObservedRunningTime="2026-01-27 07:42:22.133405207 +0000 UTC m=+1554.729027733" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.585792 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hv88b"] Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.595527 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hv88b"] Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.595644 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.718848 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-utilities\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.719220 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-catalog-content\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.719427 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9w99\" (UniqueName: \"kubernetes.io/projected/e4d2a63a-9bba-4f03-99da-aa1b31306e68-kube-api-access-c9w99\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.821138 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-utilities\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.821257 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-catalog-content\") pod 
\"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.821414 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9w99\" (UniqueName: \"kubernetes.io/projected/e4d2a63a-9bba-4f03-99da-aa1b31306e68-kube-api-access-c9w99\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.821702 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-catalog-content\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.821925 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-utilities\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.844867 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9w99\" (UniqueName: \"kubernetes.io/projected/e4d2a63a-9bba-4f03-99da-aa1b31306e68-kube-api-access-c9w99\") pod \"certified-operators-hv88b\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:49 crc kubenswrapper[4764]: I0127 07:42:49.923617 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:50 crc kubenswrapper[4764]: I0127 07:42:50.402033 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hv88b"] Jan 27 07:42:51 crc kubenswrapper[4764]: I0127 07:42:51.397225 4764 generic.go:334] "Generic (PLEG): container finished" podID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerID="2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227" exitCode=0 Jan 27 07:42:51 crc kubenswrapper[4764]: I0127 07:42:51.397352 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hv88b" event={"ID":"e4d2a63a-9bba-4f03-99da-aa1b31306e68","Type":"ContainerDied","Data":"2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227"} Jan 27 07:42:51 crc kubenswrapper[4764]: I0127 07:42:51.397676 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hv88b" event={"ID":"e4d2a63a-9bba-4f03-99da-aa1b31306e68","Type":"ContainerStarted","Data":"557335e156197bc5160b2cd930a9e5ad1e99bd55f762ba8d005f6db65b89bc1a"} Jan 27 07:42:52 crc kubenswrapper[4764]: I0127 07:42:52.418072 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hv88b" event={"ID":"e4d2a63a-9bba-4f03-99da-aa1b31306e68","Type":"ContainerStarted","Data":"1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999"} Jan 27 07:42:53 crc kubenswrapper[4764]: I0127 07:42:53.427814 4764 generic.go:334] "Generic (PLEG): container finished" podID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerID="1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999" exitCode=0 Jan 27 07:42:53 crc kubenswrapper[4764]: I0127 07:42:53.427870 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hv88b" 
event={"ID":"e4d2a63a-9bba-4f03-99da-aa1b31306e68","Type":"ContainerDied","Data":"1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999"} Jan 27 07:42:54 crc kubenswrapper[4764]: I0127 07:42:54.464362 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hv88b" event={"ID":"e4d2a63a-9bba-4f03-99da-aa1b31306e68","Type":"ContainerStarted","Data":"7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087"} Jan 27 07:42:54 crc kubenswrapper[4764]: I0127 07:42:54.499719 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hv88b" podStartSLOduration=3.073514195 podStartE2EDuration="5.499698066s" podCreationTimestamp="2026-01-27 07:42:49 +0000 UTC" firstStartedPulling="2026-01-27 07:42:51.401999569 +0000 UTC m=+1583.997622115" lastFinishedPulling="2026-01-27 07:42:53.82818346 +0000 UTC m=+1586.423805986" observedRunningTime="2026-01-27 07:42:54.488413203 +0000 UTC m=+1587.084035759" watchObservedRunningTime="2026-01-27 07:42:54.499698066 +0000 UTC m=+1587.095320592" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.143411 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8fcn8"] Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.145861 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.153987 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8fcn8"] Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.261271 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfncv\" (UniqueName: \"kubernetes.io/projected/e928516c-88fd-422e-899f-97ea199f63a8-kube-api-access-bfncv\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.261501 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-utilities\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.261561 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-catalog-content\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.363182 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfncv\" (UniqueName: \"kubernetes.io/projected/e928516c-88fd-422e-899f-97ea199f63a8-kube-api-access-bfncv\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.363569 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-utilities\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.363606 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-catalog-content\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.364196 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-utilities\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.364432 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-catalog-content\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.385242 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfncv\" (UniqueName: \"kubernetes.io/projected/e928516c-88fd-422e-899f-97ea199f63a8-kube-api-access-bfncv\") pod \"community-operators-8fcn8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.475539 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:42:56 crc kubenswrapper[4764]: I0127 07:42:56.977177 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8fcn8"] Jan 27 07:42:56 crc kubenswrapper[4764]: W0127 07:42:56.983058 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode928516c_88fd_422e_899f_97ea199f63a8.slice/crio-2bc540e96138b7e63453e780ff1e51f1108771254205e8505cf49798a1da0291 WatchSource:0}: Error finding container 2bc540e96138b7e63453e780ff1e51f1108771254205e8505cf49798a1da0291: Status 404 returned error can't find the container with id 2bc540e96138b7e63453e780ff1e51f1108771254205e8505cf49798a1da0291 Jan 27 07:42:57 crc kubenswrapper[4764]: I0127 07:42:57.498373 4764 generic.go:334] "Generic (PLEG): container finished" podID="e928516c-88fd-422e-899f-97ea199f63a8" containerID="75e90b19bc20b6c372b6bfef0c798d8cc275bd02639fc0e99399c47eef2854f2" exitCode=0 Jan 27 07:42:57 crc kubenswrapper[4764]: I0127 07:42:57.498526 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fcn8" event={"ID":"e928516c-88fd-422e-899f-97ea199f63a8","Type":"ContainerDied","Data":"75e90b19bc20b6c372b6bfef0c798d8cc275bd02639fc0e99399c47eef2854f2"} Jan 27 07:42:57 crc kubenswrapper[4764]: I0127 07:42:57.499062 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fcn8" event={"ID":"e928516c-88fd-422e-899f-97ea199f63a8","Type":"ContainerStarted","Data":"2bc540e96138b7e63453e780ff1e51f1108771254205e8505cf49798a1da0291"} Jan 27 07:42:59 crc kubenswrapper[4764]: I0127 07:42:59.924447 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:59 crc kubenswrapper[4764]: I0127 07:42:59.924989 4764 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:42:59 crc kubenswrapper[4764]: I0127 07:42:59.976511 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:43:00 crc kubenswrapper[4764]: I0127 07:43:00.527861 4764 generic.go:334] "Generic (PLEG): container finished" podID="e928516c-88fd-422e-899f-97ea199f63a8" containerID="ff1a5e2752b20985dd12e9a6d97e2d36161f94a57bf177a1d5d4c07c02ad21b8" exitCode=0 Jan 27 07:43:00 crc kubenswrapper[4764]: I0127 07:43:00.527919 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fcn8" event={"ID":"e928516c-88fd-422e-899f-97ea199f63a8","Type":"ContainerDied","Data":"ff1a5e2752b20985dd12e9a6d97e2d36161f94a57bf177a1d5d4c07c02ad21b8"} Jan 27 07:43:00 crc kubenswrapper[4764]: I0127 07:43:00.592187 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:43:01 crc kubenswrapper[4764]: I0127 07:43:01.539010 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fcn8" event={"ID":"e928516c-88fd-422e-899f-97ea199f63a8","Type":"ContainerStarted","Data":"b6c07cbd731acdc3bca506cbc29dbf5e596858de629b32f9e2db36e302fda793"} Jan 27 07:43:01 crc kubenswrapper[4764]: I0127 07:43:01.560151 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8fcn8" podStartSLOduration=2.045031903 podStartE2EDuration="5.56013167s" podCreationTimestamp="2026-01-27 07:42:56 +0000 UTC" firstStartedPulling="2026-01-27 07:42:57.500899549 +0000 UTC m=+1590.096522125" lastFinishedPulling="2026-01-27 07:43:01.015999366 +0000 UTC m=+1593.611621892" observedRunningTime="2026-01-27 07:43:01.556623296 +0000 UTC m=+1594.152245842" watchObservedRunningTime="2026-01-27 07:43:01.56013167 +0000 UTC m=+1594.155754196" 
Jan 27 07:43:02 crc kubenswrapper[4764]: I0127 07:43:02.935304 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hv88b"] Jan 27 07:43:02 crc kubenswrapper[4764]: I0127 07:43:02.935662 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hv88b" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="registry-server" containerID="cri-o://7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087" gracePeriod=2 Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.423743 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.544788 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w57tg"] Jan 27 07:43:03 crc kubenswrapper[4764]: E0127 07:43:03.545624 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="registry-server" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.545648 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="registry-server" Jan 27 07:43:03 crc kubenswrapper[4764]: E0127 07:43:03.545678 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="extract-content" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.545686 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="extract-content" Jan 27 07:43:03 crc kubenswrapper[4764]: E0127 07:43:03.545713 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="extract-utilities" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.545722 4764 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="extract-utilities" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.545902 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerName="registry-server" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.547257 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.560601 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w57tg"] Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.561996 4764 generic.go:334] "Generic (PLEG): container finished" podID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" containerID="7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087" exitCode=0 Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.562047 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hv88b" event={"ID":"e4d2a63a-9bba-4f03-99da-aa1b31306e68","Type":"ContainerDied","Data":"7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087"} Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.562078 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hv88b" event={"ID":"e4d2a63a-9bba-4f03-99da-aa1b31306e68","Type":"ContainerDied","Data":"557335e156197bc5160b2cd930a9e5ad1e99bd55f762ba8d005f6db65b89bc1a"} Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.562118 4764 scope.go:117] "RemoveContainer" containerID="7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.562281 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hv88b" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.588266 4764 scope.go:117] "RemoveContainer" containerID="1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.601520 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-utilities\") pod \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.602238 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-catalog-content\") pod \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.602384 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-utilities" (OuterVolumeSpecName: "utilities") pod "e4d2a63a-9bba-4f03-99da-aa1b31306e68" (UID: "e4d2a63a-9bba-4f03-99da-aa1b31306e68"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.602404 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9w99\" (UniqueName: \"kubernetes.io/projected/e4d2a63a-9bba-4f03-99da-aa1b31306e68-kube-api-access-c9w99\") pod \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\" (UID: \"e4d2a63a-9bba-4f03-99da-aa1b31306e68\") " Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.602595 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnvr\" (UniqueName: \"kubernetes.io/projected/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-kube-api-access-hxnvr\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.602639 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-catalog-content\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.602789 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-utilities\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.602885 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.609707 4764 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d2a63a-9bba-4f03-99da-aa1b31306e68-kube-api-access-c9w99" (OuterVolumeSpecName: "kube-api-access-c9w99") pod "e4d2a63a-9bba-4f03-99da-aa1b31306e68" (UID: "e4d2a63a-9bba-4f03-99da-aa1b31306e68"). InnerVolumeSpecName "kube-api-access-c9w99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.613006 4764 scope.go:117] "RemoveContainer" containerID="2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.655423 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4d2a63a-9bba-4f03-99da-aa1b31306e68" (UID: "e4d2a63a-9bba-4f03-99da-aa1b31306e68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.696552 4764 scope.go:117] "RemoveContainer" containerID="7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087" Jan 27 07:43:03 crc kubenswrapper[4764]: E0127 07:43:03.697126 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087\": container with ID starting with 7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087 not found: ID does not exist" containerID="7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.697171 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087"} err="failed to get container status \"7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087\": 
rpc error: code = NotFound desc = could not find container \"7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087\": container with ID starting with 7288c60145a66d399dbb9185703fb5d6a6b34cf8dd1179ffebeb1d7ea7243087 not found: ID does not exist" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.697211 4764 scope.go:117] "RemoveContainer" containerID="1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999" Jan 27 07:43:03 crc kubenswrapper[4764]: E0127 07:43:03.697618 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999\": container with ID starting with 1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999 not found: ID does not exist" containerID="1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.697662 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999"} err="failed to get container status \"1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999\": rpc error: code = NotFound desc = could not find container \"1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999\": container with ID starting with 1a718382326252d111ea2c10ed48f36452ea305c048dab2b79b5a42330f07999 not found: ID does not exist" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.697689 4764 scope.go:117] "RemoveContainer" containerID="2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227" Jan 27 07:43:03 crc kubenswrapper[4764]: E0127 07:43:03.698141 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227\": container with ID starting with 
2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227 not found: ID does not exist" containerID="2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.698167 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227"} err="failed to get container status \"2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227\": rpc error: code = NotFound desc = could not find container \"2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227\": container with ID starting with 2aeb0483b4a0c106413697aa1f340837fc0a068bb62d010de73fe24867420227 not found: ID does not exist" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.704493 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-catalog-content\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.704859 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-utilities\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.705022 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnvr\" (UniqueName: \"kubernetes.io/projected/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-kube-api-access-hxnvr\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 
07:43:03.705166 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d2a63a-9bba-4f03-99da-aa1b31306e68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.705262 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9w99\" (UniqueName: \"kubernetes.io/projected/e4d2a63a-9bba-4f03-99da-aa1b31306e68-kube-api-access-c9w99\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.705071 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-catalog-content\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.705311 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-utilities\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.726699 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnvr\" (UniqueName: \"kubernetes.io/projected/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-kube-api-access-hxnvr\") pod \"redhat-marketplace-w57tg\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.865866 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.906499 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hv88b"] Jan 27 07:43:03 crc kubenswrapper[4764]: I0127 07:43:03.915979 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hv88b"] Jan 27 07:43:04 crc kubenswrapper[4764]: W0127 07:43:04.396985 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05e9c72a_5f9b_4187_b0e9_0e811a55a39c.slice/crio-dd57da7c619027d65153aea43e0708403dff3afcb21b43f416721c8b258ca9a2 WatchSource:0}: Error finding container dd57da7c619027d65153aea43e0708403dff3afcb21b43f416721c8b258ca9a2: Status 404 returned error can't find the container with id dd57da7c619027d65153aea43e0708403dff3afcb21b43f416721c8b258ca9a2 Jan 27 07:43:04 crc kubenswrapper[4764]: I0127 07:43:04.400777 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w57tg"] Jan 27 07:43:04 crc kubenswrapper[4764]: I0127 07:43:04.452278 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d2a63a-9bba-4f03-99da-aa1b31306e68" path="/var/lib/kubelet/pods/e4d2a63a-9bba-4f03-99da-aa1b31306e68/volumes" Jan 27 07:43:04 crc kubenswrapper[4764]: I0127 07:43:04.574619 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w57tg" event={"ID":"05e9c72a-5f9b-4187-b0e9-0e811a55a39c","Type":"ContainerStarted","Data":"dd57da7c619027d65153aea43e0708403dff3afcb21b43f416721c8b258ca9a2"} Jan 27 07:43:05 crc kubenswrapper[4764]: I0127 07:43:05.586506 4764 generic.go:334] "Generic (PLEG): container finished" podID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerID="ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d" exitCode=0 Jan 27 07:43:05 crc 
kubenswrapper[4764]: I0127 07:43:05.586569 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w57tg" event={"ID":"05e9c72a-5f9b-4187-b0e9-0e811a55a39c","Type":"ContainerDied","Data":"ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d"} Jan 27 07:43:06 crc kubenswrapper[4764]: I0127 07:43:06.476533 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:43:06 crc kubenswrapper[4764]: I0127 07:43:06.476591 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:43:06 crc kubenswrapper[4764]: I0127 07:43:06.536271 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:43:06 crc kubenswrapper[4764]: I0127 07:43:06.656708 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:43:07 crc kubenswrapper[4764]: I0127 07:43:07.616774 4764 generic.go:334] "Generic (PLEG): container finished" podID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerID="11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf" exitCode=0 Jan 27 07:43:07 crc kubenswrapper[4764]: I0127 07:43:07.616898 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w57tg" event={"ID":"05e9c72a-5f9b-4187-b0e9-0e811a55a39c","Type":"ContainerDied","Data":"11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf"} Jan 27 07:43:08 crc kubenswrapper[4764]: I0127 07:43:08.628602 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w57tg" event={"ID":"05e9c72a-5f9b-4187-b0e9-0e811a55a39c","Type":"ContainerStarted","Data":"ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e"} Jan 27 07:43:08 crc 
kubenswrapper[4764]: I0127 07:43:08.654357 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w57tg" podStartSLOduration=3.179318649 podStartE2EDuration="5.654336169s" podCreationTimestamp="2026-01-27 07:43:03 +0000 UTC" firstStartedPulling="2026-01-27 07:43:05.588833054 +0000 UTC m=+1598.184455580" lastFinishedPulling="2026-01-27 07:43:08.063850564 +0000 UTC m=+1600.659473100" observedRunningTime="2026-01-27 07:43:08.646383207 +0000 UTC m=+1601.242005743" watchObservedRunningTime="2026-01-27 07:43:08.654336169 +0000 UTC m=+1601.249958705" Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.340474 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8fcn8"] Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.341009 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8fcn8" podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="registry-server" containerID="cri-o://b6c07cbd731acdc3bca506cbc29dbf5e596858de629b32f9e2db36e302fda793" gracePeriod=2 Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.640226 4764 generic.go:334] "Generic (PLEG): container finished" podID="e928516c-88fd-422e-899f-97ea199f63a8" containerID="b6c07cbd731acdc3bca506cbc29dbf5e596858de629b32f9e2db36e302fda793" exitCode=0 Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.640322 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fcn8" event={"ID":"e928516c-88fd-422e-899f-97ea199f63a8","Type":"ContainerDied","Data":"b6c07cbd731acdc3bca506cbc29dbf5e596858de629b32f9e2db36e302fda793"} Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.808855 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.922765 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-catalog-content\") pod \"e928516c-88fd-422e-899f-97ea199f63a8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.923011 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-utilities\") pod \"e928516c-88fd-422e-899f-97ea199f63a8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.923179 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfncv\" (UniqueName: \"kubernetes.io/projected/e928516c-88fd-422e-899f-97ea199f63a8-kube-api-access-bfncv\") pod \"e928516c-88fd-422e-899f-97ea199f63a8\" (UID: \"e928516c-88fd-422e-899f-97ea199f63a8\") " Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.923911 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-utilities" (OuterVolumeSpecName: "utilities") pod "e928516c-88fd-422e-899f-97ea199f63a8" (UID: "e928516c-88fd-422e-899f-97ea199f63a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.928674 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e928516c-88fd-422e-899f-97ea199f63a8-kube-api-access-bfncv" (OuterVolumeSpecName: "kube-api-access-bfncv") pod "e928516c-88fd-422e-899f-97ea199f63a8" (UID: "e928516c-88fd-422e-899f-97ea199f63a8"). InnerVolumeSpecName "kube-api-access-bfncv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:43:09 crc kubenswrapper[4764]: I0127 07:43:09.972355 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e928516c-88fd-422e-899f-97ea199f63a8" (UID: "e928516c-88fd-422e-899f-97ea199f63a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.025325 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfncv\" (UniqueName: \"kubernetes.io/projected/e928516c-88fd-422e-899f-97ea199f63a8-kube-api-access-bfncv\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.025366 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.025379 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e928516c-88fd-422e-899f-97ea199f63a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.651121 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fcn8" event={"ID":"e928516c-88fd-422e-899f-97ea199f63a8","Type":"ContainerDied","Data":"2bc540e96138b7e63453e780ff1e51f1108771254205e8505cf49798a1da0291"} Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.651177 4764 scope.go:117] "RemoveContainer" containerID="b6c07cbd731acdc3bca506cbc29dbf5e596858de629b32f9e2db36e302fda793" Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.651192 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8fcn8" Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.677814 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8fcn8"] Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.679804 4764 scope.go:117] "RemoveContainer" containerID="ff1a5e2752b20985dd12e9a6d97e2d36161f94a57bf177a1d5d4c07c02ad21b8" Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.687299 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8fcn8"] Jan 27 07:43:10 crc kubenswrapper[4764]: I0127 07:43:10.703551 4764 scope.go:117] "RemoveContainer" containerID="75e90b19bc20b6c372b6bfef0c798d8cc275bd02639fc0e99399c47eef2854f2" Jan 27 07:43:12 crc kubenswrapper[4764]: I0127 07:43:12.451075 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e928516c-88fd-422e-899f-97ea199f63a8" path="/var/lib/kubelet/pods/e928516c-88fd-422e-899f-97ea199f63a8/volumes" Jan 27 07:43:13 crc kubenswrapper[4764]: I0127 07:43:13.866698 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:13 crc kubenswrapper[4764]: I0127 07:43:13.866967 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:13 crc kubenswrapper[4764]: I0127 07:43:13.925583 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:15 crc kubenswrapper[4764]: I0127 07:43:15.403042 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:15 crc kubenswrapper[4764]: I0127 07:43:15.475896 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w57tg"] Jan 27 07:43:17 crc 
kubenswrapper[4764]: I0127 07:43:17.367836 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w57tg" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="registry-server" containerID="cri-o://ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e" gracePeriod=2 Jan 27 07:43:17 crc kubenswrapper[4764]: I0127 07:43:17.895998 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:17 crc kubenswrapper[4764]: I0127 07:43:17.985065 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-utilities\") pod \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " Jan 27 07:43:17 crc kubenswrapper[4764]: I0127 07:43:17.985109 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-catalog-content\") pod \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " Jan 27 07:43:17 crc kubenswrapper[4764]: I0127 07:43:17.985165 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxnvr\" (UniqueName: \"kubernetes.io/projected/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-kube-api-access-hxnvr\") pod \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\" (UID: \"05e9c72a-5f9b-4187-b0e9-0e811a55a39c\") " Jan 27 07:43:17 crc kubenswrapper[4764]: I0127 07:43:17.986216 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-utilities" (OuterVolumeSpecName: "utilities") pod "05e9c72a-5f9b-4187-b0e9-0e811a55a39c" (UID: "05e9c72a-5f9b-4187-b0e9-0e811a55a39c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:43:17 crc kubenswrapper[4764]: I0127 07:43:17.991755 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-kube-api-access-hxnvr" (OuterVolumeSpecName: "kube-api-access-hxnvr") pod "05e9c72a-5f9b-4187-b0e9-0e811a55a39c" (UID: "05e9c72a-5f9b-4187-b0e9-0e811a55a39c"). InnerVolumeSpecName "kube-api-access-hxnvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.084468 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05e9c72a-5f9b-4187-b0e9-0e811a55a39c" (UID: "05e9c72a-5f9b-4187-b0e9-0e811a55a39c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.087885 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxnvr\" (UniqueName: \"kubernetes.io/projected/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-kube-api-access-hxnvr\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.087938 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.087955 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e9c72a-5f9b-4187-b0e9-0e811a55a39c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.378607 4764 generic.go:334] "Generic (PLEG): container finished" podID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" 
containerID="ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e" exitCode=0 Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.378661 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w57tg" event={"ID":"05e9c72a-5f9b-4187-b0e9-0e811a55a39c","Type":"ContainerDied","Data":"ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e"} Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.378667 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w57tg" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.378705 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w57tg" event={"ID":"05e9c72a-5f9b-4187-b0e9-0e811a55a39c","Type":"ContainerDied","Data":"dd57da7c619027d65153aea43e0708403dff3afcb21b43f416721c8b258ca9a2"} Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.378734 4764 scope.go:117] "RemoveContainer" containerID="ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.420026 4764 scope.go:117] "RemoveContainer" containerID="11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.427193 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w57tg"] Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.456193 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w57tg"] Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.457228 4764 scope.go:117] "RemoveContainer" containerID="ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.497505 4764 scope.go:117] "RemoveContainer" containerID="ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e" Jan 27 
07:43:18 crc kubenswrapper[4764]: E0127 07:43:18.497973 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e\": container with ID starting with ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e not found: ID does not exist" containerID="ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.498018 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e"} err="failed to get container status \"ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e\": rpc error: code = NotFound desc = could not find container \"ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e\": container with ID starting with ade1442304738ff392ddfb4c731b34791e300a50a8d566b2c86281a85b5ccd7e not found: ID does not exist" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.498045 4764 scope.go:117] "RemoveContainer" containerID="11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf" Jan 27 07:43:18 crc kubenswrapper[4764]: E0127 07:43:18.498781 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf\": container with ID starting with 11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf not found: ID does not exist" containerID="11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.498871 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf"} err="failed to get container status 
\"11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf\": rpc error: code = NotFound desc = could not find container \"11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf\": container with ID starting with 11dfc48ad30023f5409ca826f01214ed169d80c8933e882bae1647379eb0eebf not found: ID does not exist" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.498930 4764 scope.go:117] "RemoveContainer" containerID="ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d" Jan 27 07:43:18 crc kubenswrapper[4764]: E0127 07:43:18.499365 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d\": container with ID starting with ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d not found: ID does not exist" containerID="ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d" Jan 27 07:43:18 crc kubenswrapper[4764]: I0127 07:43:18.499402 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d"} err="failed to get container status \"ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d\": rpc error: code = NotFound desc = could not find container \"ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d\": container with ID starting with ec4a88f46b7cb97e09054fad1fb2d84252a0c9169e3edeb026580ffdf5ffb13d not found: ID does not exist" Jan 27 07:43:20 crc kubenswrapper[4764]: I0127 07:43:20.456946 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" path="/var/lib/kubelet/pods/05e9c72a-5f9b-4187-b0e9-0e811a55a39c/volumes" Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.065969 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nbzj6"] Jan 27 07:43:23 crc 
kubenswrapper[4764]: I0127 07:43:23.079452 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mc8vn"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.087811 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fe0-account-create-update-xjp2h"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.095519 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6dltf"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.103114 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9a77-account-create-update-ql2vm"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.110204 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nbzj6"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.117403 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mc8vn"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.125900 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6dltf"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.134066 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7fe0-account-create-update-xjp2h"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.143183 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-be4a-account-create-update-kqxnz"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.154078 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9a77-account-create-update-ql2vm"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.165090 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-be4a-account-create-update-kqxnz"] Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.762397 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:43:23 crc kubenswrapper[4764]: I0127 07:43:23.762519 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:43:24 crc kubenswrapper[4764]: I0127 07:43:24.451020 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf" path="/var/lib/kubelet/pods/0d71a0a7-8c6a-4a81-81f9-be5d0ce20faf/volumes" Jan 27 07:43:24 crc kubenswrapper[4764]: I0127 07:43:24.452475 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d04865-44ab-4be1-be5d-01347984e03f" path="/var/lib/kubelet/pods/32d04865-44ab-4be1-be5d-01347984e03f/volumes" Jan 27 07:43:24 crc kubenswrapper[4764]: I0127 07:43:24.453709 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="720c25d4-64d8-44fe-8d52-fa50f8fc3b2d" path="/var/lib/kubelet/pods/720c25d4-64d8-44fe-8d52-fa50f8fc3b2d/volumes" Jan 27 07:43:24 crc kubenswrapper[4764]: I0127 07:43:24.454821 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea70529-f5f1-4fce-b115-4f3274e995a5" path="/var/lib/kubelet/pods/9ea70529-f5f1-4fce-b115-4f3274e995a5/volumes" Jan 27 07:43:24 crc kubenswrapper[4764]: I0127 07:43:24.456894 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5c96778-194f-4ad0-bd79-5a60e60f70f6" path="/var/lib/kubelet/pods/a5c96778-194f-4ad0-bd79-5a60e60f70f6/volumes" Jan 27 07:43:24 crc kubenswrapper[4764]: I0127 07:43:24.458150 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c1a48640-b2e1-4f03-8579-e4983204deb9" path="/var/lib/kubelet/pods/c1a48640-b2e1-4f03-8579-e4983204deb9/volumes" Jan 27 07:43:31 crc kubenswrapper[4764]: I0127 07:43:31.053679 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dh7v6"] Jan 27 07:43:31 crc kubenswrapper[4764]: I0127 07:43:31.066308 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dh7v6"] Jan 27 07:43:32 crc kubenswrapper[4764]: I0127 07:43:32.449218 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe7d723-e508-4c6a-ab0f-da07d92ed627" path="/var/lib/kubelet/pods/9fe7d723-e508-4c6a-ab0f-da07d92ed627/volumes" Jan 27 07:43:41 crc kubenswrapper[4764]: I0127 07:43:41.472420 4764 scope.go:117] "RemoveContainer" containerID="6302e6e7cfa39fdff1499828afbd77a948fc011a4af517d28252f930049ce663" Jan 27 07:43:41 crc kubenswrapper[4764]: I0127 07:43:41.505553 4764 scope.go:117] "RemoveContainer" containerID="88a537c7298f4e11c8a5130a7c8d7a8af1cf483855796556c0eefbc2250dd181" Jan 27 07:43:41 crc kubenswrapper[4764]: I0127 07:43:41.561672 4764 scope.go:117] "RemoveContainer" containerID="ae23932d8de22d9b3b0168b1ec7a28e2a87a1de58419ab9adc728780282e9581" Jan 27 07:43:41 crc kubenswrapper[4764]: I0127 07:43:41.611540 4764 scope.go:117] "RemoveContainer" containerID="29354f239bda28fff6f9e9d68056fa04510a540ebc6ebe9da3836911214ce4c1" Jan 27 07:43:41 crc kubenswrapper[4764]: I0127 07:43:41.644020 4764 scope.go:117] "RemoveContainer" containerID="01a037e4ea618811c59c83eb27e1982033b5f300ad25748a2a387f0ee7be1b09" Jan 27 07:43:41 crc kubenswrapper[4764]: I0127 07:43:41.689583 4764 scope.go:117] "RemoveContainer" containerID="439ca86c334cafa6514b8cec21ddca6c03188ea71ef5316f713618c907142052" Jan 27 07:43:41 crc kubenswrapper[4764]: I0127 07:43:41.767841 4764 scope.go:117] "RemoveContainer" containerID="6184b5a48b3a7306f15bfeebc9a9d2b5669c15662b11a46c1492e740596778ba" Jan 27 07:43:47 crc kubenswrapper[4764]: 
I0127 07:43:47.048144 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qln9d"] Jan 27 07:43:47 crc kubenswrapper[4764]: I0127 07:43:47.061605 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qln9d"] Jan 27 07:43:48 crc kubenswrapper[4764]: I0127 07:43:48.451663 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac97941-f22c-4599-9674-eeebb1347a85" path="/var/lib/kubelet/pods/2ac97941-f22c-4599-9674-eeebb1347a85/volumes" Jan 27 07:43:51 crc kubenswrapper[4764]: I0127 07:43:51.036493 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-f57wb"] Jan 27 07:43:51 crc kubenswrapper[4764]: I0127 07:43:51.053719 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-f57wb"] Jan 27 07:43:52 crc kubenswrapper[4764]: I0127 07:43:52.451832 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fde588f-3a41-400e-9dd9-42ffb66989db" path="/var/lib/kubelet/pods/8fde588f-3a41-400e-9dd9-42ffb66989db/volumes" Jan 27 07:43:53 crc kubenswrapper[4764]: I0127 07:43:53.762651 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:43:53 crc kubenswrapper[4764]: I0127 07:43:53.763061 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.053323 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-232f-account-create-update-lzptk"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.069364 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cwtw5"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.079650 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9636-account-create-update-lj4pb"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.087395 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-232f-account-create-update-lzptk"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.094666 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1f49-account-create-update-hdh2b"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.102425 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9636-account-create-update-lj4pb"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.110473 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1f49-account-create-update-hdh2b"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.117753 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cfvvc"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.124803 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cfvvc"] Jan 27 07:43:55 crc kubenswrapper[4764]: I0127 07:43:55.131939 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cwtw5"] Jan 27 07:43:56 crc kubenswrapper[4764]: I0127 07:43:56.449307 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f98420-ea6f-40cb-b274-0bf3b3282252" path="/var/lib/kubelet/pods/13f98420-ea6f-40cb-b274-0bf3b3282252/volumes" Jan 27 07:43:56 crc kubenswrapper[4764]: I0127 07:43:56.449960 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a2109a-990b-4435-a62e-4b4ca7d52c1e" 
path="/var/lib/kubelet/pods/23a2109a-990b-4435-a62e-4b4ca7d52c1e/volumes" Jan 27 07:43:56 crc kubenswrapper[4764]: I0127 07:43:56.450568 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a7123da-ea98-40d7-bed5-0cdbafc74ca1" path="/var/lib/kubelet/pods/4a7123da-ea98-40d7-bed5-0cdbafc74ca1/volumes" Jan 27 07:43:56 crc kubenswrapper[4764]: I0127 07:43:56.451110 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c458dd5-a975-4d75-83f4-c58184f63ab2" path="/var/lib/kubelet/pods/8c458dd5-a975-4d75-83f4-c58184f63ab2/volumes" Jan 27 07:43:56 crc kubenswrapper[4764]: I0127 07:43:56.452214 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e76902-0634-44b5-bb6b-0cdf63efaf87" path="/var/lib/kubelet/pods/f3e76902-0634-44b5-bb6b-0cdf63efaf87/volumes" Jan 27 07:43:59 crc kubenswrapper[4764]: I0127 07:43:59.055387 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zgzzx"] Jan 27 07:43:59 crc kubenswrapper[4764]: I0127 07:43:59.062898 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zgzzx"] Jan 27 07:44:00 crc kubenswrapper[4764]: I0127 07:44:00.457818 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1070498a-e8fe-43a6-b6d3-4a2862f24fee" path="/var/lib/kubelet/pods/1070498a-e8fe-43a6-b6d3-4a2862f24fee/volumes" Jan 27 07:44:23 crc kubenswrapper[4764]: I0127 07:44:23.762505 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:44:23 crc kubenswrapper[4764]: I0127 07:44:23.763238 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:44:23 crc kubenswrapper[4764]: I0127 07:44:23.763301 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:44:23 crc kubenswrapper[4764]: I0127 07:44:23.764358 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:44:23 crc kubenswrapper[4764]: I0127 07:44:23.764488 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" gracePeriod=600 Jan 27 07:44:23 crc kubenswrapper[4764]: E0127 07:44:23.893887 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:44:24 crc kubenswrapper[4764]: I0127 07:44:24.080025 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" exitCode=0 Jan 27 07:44:24 crc kubenswrapper[4764]: I0127 
07:44:24.080064 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6"} Jan 27 07:44:24 crc kubenswrapper[4764]: I0127 07:44:24.080094 4764 scope.go:117] "RemoveContainer" containerID="b810d312218001b771dbac4e138fc3c15bb0fd651ad5e5238d35ccb0d85c52f4" Jan 27 07:44:24 crc kubenswrapper[4764]: I0127 07:44:24.080637 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:44:24 crc kubenswrapper[4764]: E0127 07:44:24.080951 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:44:31 crc kubenswrapper[4764]: I0127 07:44:31.053347 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-g7f7g"] Jan 27 07:44:31 crc kubenswrapper[4764]: I0127 07:44:31.066170 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-g7f7g"] Jan 27 07:44:32 crc kubenswrapper[4764]: I0127 07:44:32.460689 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e6ef61-e6b4-4719-ae71-1983696d2d69" path="/var/lib/kubelet/pods/b3e6ef61-e6b4-4719-ae71-1983696d2d69/volumes" Jan 27 07:44:35 crc kubenswrapper[4764]: I0127 07:44:35.045014 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dtl6s"] Jan 27 07:44:35 crc kubenswrapper[4764]: I0127 07:44:35.062206 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-dtl6s"] Jan 27 07:44:36 crc kubenswrapper[4764]: I0127 07:44:36.440112 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:44:36 crc kubenswrapper[4764]: E0127 07:44:36.440630 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:44:36 crc kubenswrapper[4764]: I0127 07:44:36.459060 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77beac3a-985b-45d4-b804-ff2926d7ab7d" path="/var/lib/kubelet/pods/77beac3a-985b-45d4-b804-ff2926d7ab7d/volumes" Jan 27 07:44:40 crc kubenswrapper[4764]: I0127 07:44:40.049908 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cz896"] Jan 27 07:44:40 crc kubenswrapper[4764]: I0127 07:44:40.063473 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cz896"] Jan 27 07:44:40 crc kubenswrapper[4764]: I0127 07:44:40.457504 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1e2f4c-b077-4a04-8edb-4d169e42964e" path="/var/lib/kubelet/pods/ed1e2f4c-b077-4a04-8edb-4d169e42964e/volumes" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.021116 4764 scope.go:117] "RemoveContainer" containerID="1bffcb01def431b5e658cd2aaed8ddad47711933995b6aa74f7af7ddb1d57bd7" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.071851 4764 scope.go:117] "RemoveContainer" containerID="3a01c9c23b9dea86eb21b397da6cf942c2bb5e2b4853f1e14a76f623554841c1" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.144291 4764 scope.go:117] "RemoveContainer" 
containerID="e4acb1c925b44d9763be14d6bc219c5fd882d01ab50b508b7a3944c82cddaacd" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.240772 4764 scope.go:117] "RemoveContainer" containerID="1b0c43f340897be7ab9cd3bb916d6490422d39699247c8ee11e6c854c3d10130" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.302542 4764 scope.go:117] "RemoveContainer" containerID="032f9b05639300c211df9ab39967f53ea3beaa71a440612d1afd5cd03f76ebad" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.352870 4764 scope.go:117] "RemoveContainer" containerID="f3c2b34a37faf07ff247bb50e686df1a1c83b87236b0c227ca075403ef9ef1b8" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.372133 4764 scope.go:117] "RemoveContainer" containerID="1e947509ede9b58e3c4e2c08675c499070d87eb94f1529dbb8060d270d2837d2" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.396256 4764 scope.go:117] "RemoveContainer" containerID="2947ec70b8ed8382f839405bddbd121b048a970273e77ec5e7c8b5751ece6bb3" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.439142 4764 scope.go:117] "RemoveContainer" containerID="0144b02fd2128fdb0c9f359ba7e805cecf2120b82774c5b231364d5222b4cd80" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.473301 4764 scope.go:117] "RemoveContainer" containerID="66024f67d7623815f69a30dbdc79809a5cd0a723f80f6594d3405443669adc91" Jan 27 07:44:42 crc kubenswrapper[4764]: I0127 07:44:42.501689 4764 scope.go:117] "RemoveContainer" containerID="549c8af9d6b505397ae223e136c1c4660908ad4f53bc194b052646ff4282784e" Jan 27 07:44:47 crc kubenswrapper[4764]: I0127 07:44:47.438411 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:44:47 crc kubenswrapper[4764]: E0127 07:44:47.439426 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:44:50 crc kubenswrapper[4764]: I0127 07:44:50.044076 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4tp44"] Jan 27 07:44:50 crc kubenswrapper[4764]: I0127 07:44:50.058461 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4tp44"] Jan 27 07:44:50 crc kubenswrapper[4764]: I0127 07:44:50.458418 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cfdc388-3353-43e2-99b0-7c6e17fb78f9" path="/var/lib/kubelet/pods/7cfdc388-3353-43e2-99b0-7c6e17fb78f9/volumes" Jan 27 07:44:53 crc kubenswrapper[4764]: I0127 07:44:53.032790 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7875w"] Jan 27 07:44:53 crc kubenswrapper[4764]: I0127 07:44:53.041385 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7875w"] Jan 27 07:44:54 crc kubenswrapper[4764]: I0127 07:44:54.457020 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c3a86b-6e48-4aa4-950e-d8ecf643cf48" path="/var/lib/kubelet/pods/64c3a86b-6e48-4aa4-950e-d8ecf643cf48/volumes" Jan 27 07:44:57 crc kubenswrapper[4764]: I0127 07:44:57.440862 4764 generic.go:334] "Generic (PLEG): container finished" podID="8b37bfd5-b31b-489d-a973-ffeeb769660c" containerID="fb92aac4bc950ee30903f0b0d0f7ba2f67c66cc0e77174a1ad4fe9bfdb746f3c" exitCode=0 Jan 27 07:44:57 crc kubenswrapper[4764]: I0127 07:44:57.440921 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" event={"ID":"8b37bfd5-b31b-489d-a973-ffeeb769660c","Type":"ContainerDied","Data":"fb92aac4bc950ee30903f0b0d0f7ba2f67c66cc0e77174a1ad4fe9bfdb746f3c"} Jan 27 07:44:58 crc kubenswrapper[4764]: I0127 
07:44:58.936059 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.074295 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-ssh-key-openstack-edpm-ipam\") pod \"8b37bfd5-b31b-489d-a973-ffeeb769660c\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.074361 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfvvs\" (UniqueName: \"kubernetes.io/projected/8b37bfd5-b31b-489d-a973-ffeeb769660c-kube-api-access-rfvvs\") pod \"8b37bfd5-b31b-489d-a973-ffeeb769660c\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.074395 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-inventory\") pod \"8b37bfd5-b31b-489d-a973-ffeeb769660c\" (UID: \"8b37bfd5-b31b-489d-a973-ffeeb769660c\") " Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.080872 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b37bfd5-b31b-489d-a973-ffeeb769660c-kube-api-access-rfvvs" (OuterVolumeSpecName: "kube-api-access-rfvvs") pod "8b37bfd5-b31b-489d-a973-ffeeb769660c" (UID: "8b37bfd5-b31b-489d-a973-ffeeb769660c"). InnerVolumeSpecName "kube-api-access-rfvvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.106387 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-inventory" (OuterVolumeSpecName: "inventory") pod "8b37bfd5-b31b-489d-a973-ffeeb769660c" (UID: "8b37bfd5-b31b-489d-a973-ffeeb769660c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.107363 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b37bfd5-b31b-489d-a973-ffeeb769660c" (UID: "8b37bfd5-b31b-489d-a973-ffeeb769660c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.176646 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.176684 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfvvs\" (UniqueName: \"kubernetes.io/projected/8b37bfd5-b31b-489d-a973-ffeeb769660c-kube-api-access-rfvvs\") on node \"crc\" DevicePath \"\"" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.176693 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b37bfd5-b31b-489d-a973-ffeeb769660c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.466738 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" 
event={"ID":"8b37bfd5-b31b-489d-a973-ffeeb769660c","Type":"ContainerDied","Data":"f7a380e20db196237fbd2a2f799cbcd41af5fc0a804917785f03f5e3dac284de"} Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.466802 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7a380e20db196237fbd2a2f799cbcd41af5fc0a804917785f03f5e3dac284de" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.466881 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558181 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl"] Jan 27 07:44:59 crc kubenswrapper[4764]: E0127 07:44:59.558574 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="extract-content" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558594 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="extract-content" Jan 27 07:44:59 crc kubenswrapper[4764]: E0127 07:44:59.558606 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b37bfd5-b31b-489d-a973-ffeeb769660c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558613 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b37bfd5-b31b-489d-a973-ffeeb769660c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 07:44:59 crc kubenswrapper[4764]: E0127 07:44:59.558629 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="extract-content" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558635 4764 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="extract-content" Jan 27 07:44:59 crc kubenswrapper[4764]: E0127 07:44:59.558645 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="extract-utilities" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558651 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="extract-utilities" Jan 27 07:44:59 crc kubenswrapper[4764]: E0127 07:44:59.558668 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="registry-server" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558674 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="registry-server" Jan 27 07:44:59 crc kubenswrapper[4764]: E0127 07:44:59.558685 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="extract-utilities" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558691 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="extract-utilities" Jan 27 07:44:59 crc kubenswrapper[4764]: E0127 07:44:59.558710 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="registry-server" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558715 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="registry-server" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558895 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="e928516c-88fd-422e-899f-97ea199f63a8" containerName="registry-server" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558907 4764 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="05e9c72a-5f9b-4187-b0e9-0e811a55a39c" containerName="registry-server" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.558939 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b37bfd5-b31b-489d-a973-ffeeb769660c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.559641 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.561156 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.562390 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.562855 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.563031 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.569907 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl"] Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.691519 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.691585 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.691933 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgn2\" (UniqueName: \"kubernetes.io/projected/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-kube-api-access-rdgn2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.794726 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.794820 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.794939 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgn2\" 
(UniqueName: \"kubernetes.io/projected/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-kube-api-access-rdgn2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.800837 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.802295 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.824712 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgn2\" (UniqueName: \"kubernetes.io/projected/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-kube-api-access-rdgn2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:44:59 crc kubenswrapper[4764]: I0127 07:44:59.887155 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.141364 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8"] Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.143134 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.148261 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.148582 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.151380 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8"] Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.212809 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfd0e7a-0b46-423a-9759-9230867aaddb-secret-volume\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.213307 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfd0e7a-0b46-423a-9759-9230867aaddb-config-volume\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc 
kubenswrapper[4764]: I0127 07:45:00.213811 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmz2\" (UniqueName: \"kubernetes.io/projected/fdfd0e7a-0b46-423a-9759-9230867aaddb-kube-api-access-6tmz2\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.315021 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfd0e7a-0b46-423a-9759-9230867aaddb-secret-volume\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.315077 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfd0e7a-0b46-423a-9759-9230867aaddb-config-volume\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.315251 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmz2\" (UniqueName: \"kubernetes.io/projected/fdfd0e7a-0b46-423a-9759-9230867aaddb-kube-api-access-6tmz2\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.316866 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfd0e7a-0b46-423a-9759-9230867aaddb-config-volume\") pod \"collect-profiles-29491665-qwkv8\" 
(UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.324980 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfd0e7a-0b46-423a-9759-9230867aaddb-secret-volume\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.341316 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmz2\" (UniqueName: \"kubernetes.io/projected/fdfd0e7a-0b46-423a-9759-9230867aaddb-kube-api-access-6tmz2\") pod \"collect-profiles-29491665-qwkv8\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.401315 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl"] Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.469573 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.480075 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" event={"ID":"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb","Type":"ContainerStarted","Data":"e3dfdef73e796664e9cbc93935c186c892db8a5579741f8e7bd673c684751b76"} Jan 27 07:45:00 crc kubenswrapper[4764]: I0127 07:45:00.898010 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8"] Jan 27 07:45:01 crc kubenswrapper[4764]: I0127 07:45:01.438455 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:45:01 crc kubenswrapper[4764]: E0127 07:45:01.439147 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:45:01 crc kubenswrapper[4764]: I0127 07:45:01.490667 4764 generic.go:334] "Generic (PLEG): container finished" podID="fdfd0e7a-0b46-423a-9759-9230867aaddb" containerID="cd46cbf76b83dd264d3732ca35d757cb532c4380a75563f034fa5cef0e0e7a4f" exitCode=0 Jan 27 07:45:01 crc kubenswrapper[4764]: I0127 07:45:01.490762 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" event={"ID":"fdfd0e7a-0b46-423a-9759-9230867aaddb","Type":"ContainerDied","Data":"cd46cbf76b83dd264d3732ca35d757cb532c4380a75563f034fa5cef0e0e7a4f"} Jan 27 07:45:01 crc kubenswrapper[4764]: I0127 07:45:01.490801 4764 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" event={"ID":"fdfd0e7a-0b46-423a-9759-9230867aaddb","Type":"ContainerStarted","Data":"de0d2b6af4d290e3f4ce747cc23f0ba62be8be033dad90b68d3f024fe372f381"} Jan 27 07:45:01 crc kubenswrapper[4764]: I0127 07:45:01.493228 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" event={"ID":"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb","Type":"ContainerStarted","Data":"a011c70dfad820d7d4356c8a594eeffd5d3c9aec89adf9652e16cf9588a5771b"} Jan 27 07:45:02 crc kubenswrapper[4764]: I0127 07:45:02.520761 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" podStartSLOduration=2.690300435 podStartE2EDuration="3.520741538s" podCreationTimestamp="2026-01-27 07:44:59 +0000 UTC" firstStartedPulling="2026-01-27 07:45:00.403767887 +0000 UTC m=+1712.999390413" lastFinishedPulling="2026-01-27 07:45:01.23420896 +0000 UTC m=+1713.829831516" observedRunningTime="2026-01-27 07:45:02.516634396 +0000 UTC m=+1715.112256922" watchObservedRunningTime="2026-01-27 07:45:02.520741538 +0000 UTC m=+1715.116364064" Jan 27 07:45:02 crc kubenswrapper[4764]: I0127 07:45:02.883208 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:02 crc kubenswrapper[4764]: I0127 07:45:02.969404 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfd0e7a-0b46-423a-9759-9230867aaddb-config-volume\") pod \"fdfd0e7a-0b46-423a-9759-9230867aaddb\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " Jan 27 07:45:02 crc kubenswrapper[4764]: I0127 07:45:02.970580 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfd0e7a-0b46-423a-9759-9230867aaddb-config-volume" (OuterVolumeSpecName: "config-volume") pod "fdfd0e7a-0b46-423a-9759-9230867aaddb" (UID: "fdfd0e7a-0b46-423a-9759-9230867aaddb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.071259 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmz2\" (UniqueName: \"kubernetes.io/projected/fdfd0e7a-0b46-423a-9759-9230867aaddb-kube-api-access-6tmz2\") pod \"fdfd0e7a-0b46-423a-9759-9230867aaddb\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.071582 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfd0e7a-0b46-423a-9759-9230867aaddb-secret-volume\") pod \"fdfd0e7a-0b46-423a-9759-9230867aaddb\" (UID: \"fdfd0e7a-0b46-423a-9759-9230867aaddb\") " Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.072096 4764 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfd0e7a-0b46-423a-9759-9230867aaddb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.076622 4764 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/fdfd0e7a-0b46-423a-9759-9230867aaddb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fdfd0e7a-0b46-423a-9759-9230867aaddb" (UID: "fdfd0e7a-0b46-423a-9759-9230867aaddb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.079296 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfd0e7a-0b46-423a-9759-9230867aaddb-kube-api-access-6tmz2" (OuterVolumeSpecName: "kube-api-access-6tmz2") pod "fdfd0e7a-0b46-423a-9759-9230867aaddb" (UID: "fdfd0e7a-0b46-423a-9759-9230867aaddb"). InnerVolumeSpecName "kube-api-access-6tmz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.173958 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tmz2\" (UniqueName: \"kubernetes.io/projected/fdfd0e7a-0b46-423a-9759-9230867aaddb-kube-api-access-6tmz2\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.174018 4764 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfd0e7a-0b46-423a-9759-9230867aaddb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.522663 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" event={"ID":"fdfd0e7a-0b46-423a-9759-9230867aaddb","Type":"ContainerDied","Data":"de0d2b6af4d290e3f4ce747cc23f0ba62be8be033dad90b68d3f024fe372f381"} Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.522721 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0d2b6af4d290e3f4ce747cc23f0ba62be8be033dad90b68d3f024fe372f381" Jan 27 07:45:03 crc kubenswrapper[4764]: I0127 07:45:03.523722 4764 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-qwkv8" Jan 27 07:45:14 crc kubenswrapper[4764]: I0127 07:45:14.438721 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:45:14 crc kubenswrapper[4764]: E0127 07:45:14.439749 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:45:27 crc kubenswrapper[4764]: I0127 07:45:27.439598 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:45:27 crc kubenswrapper[4764]: E0127 07:45:27.440713 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:45:41 crc kubenswrapper[4764]: I0127 07:45:41.438709 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:45:41 crc kubenswrapper[4764]: E0127 07:45:41.439525 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:45:42 crc kubenswrapper[4764]: I0127 07:45:42.766581 4764 scope.go:117] "RemoveContainer" containerID="6839d75e46748a65e0f30ead1d46207337d62e682f8ba4f686b0791f6c3d7132" Jan 27 07:45:42 crc kubenswrapper[4764]: I0127 07:45:42.806341 4764 scope.go:117] "RemoveContainer" containerID="3b14b95d8e8a03d5cdbc55e03ce8bd960f1670c16840466cddcb5a316eb64d92" Jan 27 07:45:52 crc kubenswrapper[4764]: I0127 07:45:52.438463 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:45:52 crc kubenswrapper[4764]: E0127 07:45:52.439370 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.045329 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-817e-account-create-update-8qzmw"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.054644 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-h5f8x"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.069465 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2465-account-create-update-5r9v9"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.079325 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-h5f8x"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.086579 4764 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-817e-account-create-update-8qzmw"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.093734 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7c88d"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.101037 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2465-account-create-update-5r9v9"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.110029 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7c88d"] Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.449637 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37677746-23d4-4650-bdc6-7dfe211b54d7" path="/var/lib/kubelet/pods/37677746-23d4-4650-bdc6-7dfe211b54d7/volumes" Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.450530 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be92c208-84c9-417a-9a1b-857fc9d3e8fd" path="/var/lib/kubelet/pods/be92c208-84c9-417a-9a1b-857fc9d3e8fd/volumes" Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.451092 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea81fb26-d728-412a-a742-7c589114ce99" path="/var/lib/kubelet/pods/ea81fb26-d728-412a-a742-7c589114ce99/volumes" Jan 27 07:45:54 crc kubenswrapper[4764]: I0127 07:45:54.451603 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2fed93-c0e7-48ee-9623-6e931a46122e" path="/var/lib/kubelet/pods/fb2fed93-c0e7-48ee-9623-6e931a46122e/volumes" Jan 27 07:45:55 crc kubenswrapper[4764]: I0127 07:45:55.049341 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bb43-account-create-update-4mxhh"] Jan 27 07:45:55 crc kubenswrapper[4764]: I0127 07:45:55.064845 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rmd8x"] Jan 27 07:45:55 crc kubenswrapper[4764]: I0127 
07:45:55.074404 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rmd8x"] Jan 27 07:45:55 crc kubenswrapper[4764]: I0127 07:45:55.084586 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bb43-account-create-update-4mxhh"] Jan 27 07:45:56 crc kubenswrapper[4764]: I0127 07:45:56.451633 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550fe574-e06c-47c1-89a7-8e4e356c5601" path="/var/lib/kubelet/pods/550fe574-e06c-47c1-89a7-8e4e356c5601/volumes" Jan 27 07:45:56 crc kubenswrapper[4764]: I0127 07:45:56.452414 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1" path="/var/lib/kubelet/pods/c24ec6ac-1010-43f6-ada1-8aca2e6ebdb1/volumes" Jan 27 07:46:04 crc kubenswrapper[4764]: I0127 07:46:04.438430 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:46:04 crc kubenswrapper[4764]: E0127 07:46:04.439046 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:46:10 crc kubenswrapper[4764]: I0127 07:46:10.140126 4764 generic.go:334] "Generic (PLEG): container finished" podID="6eb419d4-1c38-4da9-95b7-0dd6ce308bdb" containerID="a011c70dfad820d7d4356c8a594eeffd5d3c9aec89adf9652e16cf9588a5771b" exitCode=0 Jan 27 07:46:10 crc kubenswrapper[4764]: I0127 07:46:10.140209 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" 
event={"ID":"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb","Type":"ContainerDied","Data":"a011c70dfad820d7d4356c8a594eeffd5d3c9aec89adf9652e16cf9588a5771b"} Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.535640 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.637759 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdgn2\" (UniqueName: \"kubernetes.io/projected/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-kube-api-access-rdgn2\") pod \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.638242 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-ssh-key-openstack-edpm-ipam\") pod \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.638509 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-inventory\") pod \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\" (UID: \"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb\") " Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.652812 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-kube-api-access-rdgn2" (OuterVolumeSpecName: "kube-api-access-rdgn2") pod "6eb419d4-1c38-4da9-95b7-0dd6ce308bdb" (UID: "6eb419d4-1c38-4da9-95b7-0dd6ce308bdb"). InnerVolumeSpecName "kube-api-access-rdgn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.674776 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6eb419d4-1c38-4da9-95b7-0dd6ce308bdb" (UID: "6eb419d4-1c38-4da9-95b7-0dd6ce308bdb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.678613 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-inventory" (OuterVolumeSpecName: "inventory") pod "6eb419d4-1c38-4da9-95b7-0dd6ce308bdb" (UID: "6eb419d4-1c38-4da9-95b7-0dd6ce308bdb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.752765 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.752820 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:11 crc kubenswrapper[4764]: I0127 07:46:11.752832 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdgn2\" (UniqueName: \"kubernetes.io/projected/6eb419d4-1c38-4da9-95b7-0dd6ce308bdb-kube-api-access-rdgn2\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.165891 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" 
event={"ID":"6eb419d4-1c38-4da9-95b7-0dd6ce308bdb","Type":"ContainerDied","Data":"e3dfdef73e796664e9cbc93935c186c892db8a5579741f8e7bd673c684751b76"} Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.165956 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3dfdef73e796664e9cbc93935c186c892db8a5579741f8e7bd673c684751b76" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.165995 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.275551 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb"] Jan 27 07:46:12 crc kubenswrapper[4764]: E0127 07:46:12.276053 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb419d4-1c38-4da9-95b7-0dd6ce308bdb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.276068 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb419d4-1c38-4da9-95b7-0dd6ce308bdb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:12 crc kubenswrapper[4764]: E0127 07:46:12.276086 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd0e7a-0b46-423a-9759-9230867aaddb" containerName="collect-profiles" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.276093 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd0e7a-0b46-423a-9759-9230867aaddb" containerName="collect-profiles" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.276297 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfd0e7a-0b46-423a-9759-9230867aaddb" containerName="collect-profiles" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.276321 4764 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6eb419d4-1c38-4da9-95b7-0dd6ce308bdb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.277002 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.279755 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.280001 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.280286 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.280648 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.285303 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb"] Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.368946 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.369407 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-ssh-key-openstack-edpm-ipam\") 
pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.369506 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqvrn\" (UniqueName: \"kubernetes.io/projected/405eb05b-d23b-4ef5-b1bf-617c22a27767-kube-api-access-wqvrn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.471744 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.471802 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqvrn\" (UniqueName: \"kubernetes.io/projected/405eb05b-d23b-4ef5-b1bf-617c22a27767-kube-api-access-wqvrn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.471854 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.478577 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.480038 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.495743 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqvrn\" (UniqueName: \"kubernetes.io/projected/405eb05b-d23b-4ef5-b1bf-617c22a27767-kube-api-access-wqvrn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:12 crc kubenswrapper[4764]: I0127 07:46:12.609685 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:13 crc kubenswrapper[4764]: I0127 07:46:13.219291 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb"] Jan 27 07:46:14 crc kubenswrapper[4764]: I0127 07:46:14.182620 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" event={"ID":"405eb05b-d23b-4ef5-b1bf-617c22a27767","Type":"ContainerStarted","Data":"55031b03004b698bf86db8afda16bb34e88e5a8a77abcc3271cfe384451f8222"} Jan 27 07:46:14 crc kubenswrapper[4764]: I0127 07:46:14.182980 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" event={"ID":"405eb05b-d23b-4ef5-b1bf-617c22a27767","Type":"ContainerStarted","Data":"49e7f664c799939d8bdced27171b1fe3b55355eb0cba5f2a3ba3ac2d09f0a838"} Jan 27 07:46:14 crc kubenswrapper[4764]: I0127 07:46:14.202608 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" podStartSLOduration=1.6563708529999999 podStartE2EDuration="2.202586184s" podCreationTimestamp="2026-01-27 07:46:12 +0000 UTC" firstStartedPulling="2026-01-27 07:46:13.23139624 +0000 UTC m=+1785.827018766" lastFinishedPulling="2026-01-27 07:46:13.777611551 +0000 UTC m=+1786.373234097" observedRunningTime="2026-01-27 07:46:14.199368896 +0000 UTC m=+1786.794991452" watchObservedRunningTime="2026-01-27 07:46:14.202586184 +0000 UTC m=+1786.798208720" Jan 27 07:46:18 crc kubenswrapper[4764]: I0127 07:46:18.447770 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:46:18 crc kubenswrapper[4764]: E0127 07:46:18.448785 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:46:19 crc kubenswrapper[4764]: I0127 07:46:19.227588 4764 generic.go:334] "Generic (PLEG): container finished" podID="405eb05b-d23b-4ef5-b1bf-617c22a27767" containerID="55031b03004b698bf86db8afda16bb34e88e5a8a77abcc3271cfe384451f8222" exitCode=0 Jan 27 07:46:19 crc kubenswrapper[4764]: I0127 07:46:19.227629 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" event={"ID":"405eb05b-d23b-4ef5-b1bf-617c22a27767","Type":"ContainerDied","Data":"55031b03004b698bf86db8afda16bb34e88e5a8a77abcc3271cfe384451f8222"} Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.697471 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.891124 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-ssh-key-openstack-edpm-ipam\") pod \"405eb05b-d23b-4ef5-b1bf-617c22a27767\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.891219 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-inventory\") pod \"405eb05b-d23b-4ef5-b1bf-617c22a27767\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.891326 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqvrn\" (UniqueName: \"kubernetes.io/projected/405eb05b-d23b-4ef5-b1bf-617c22a27767-kube-api-access-wqvrn\") pod \"405eb05b-d23b-4ef5-b1bf-617c22a27767\" (UID: \"405eb05b-d23b-4ef5-b1bf-617c22a27767\") " Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.903697 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405eb05b-d23b-4ef5-b1bf-617c22a27767-kube-api-access-wqvrn" (OuterVolumeSpecName: "kube-api-access-wqvrn") pod "405eb05b-d23b-4ef5-b1bf-617c22a27767" (UID: "405eb05b-d23b-4ef5-b1bf-617c22a27767"). InnerVolumeSpecName "kube-api-access-wqvrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.940570 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "405eb05b-d23b-4ef5-b1bf-617c22a27767" (UID: "405eb05b-d23b-4ef5-b1bf-617c22a27767"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.950994 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-inventory" (OuterVolumeSpecName: "inventory") pod "405eb05b-d23b-4ef5-b1bf-617c22a27767" (UID: "405eb05b-d23b-4ef5-b1bf-617c22a27767"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.994066 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqvrn\" (UniqueName: \"kubernetes.io/projected/405eb05b-d23b-4ef5-b1bf-617c22a27767-kube-api-access-wqvrn\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.994299 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:20 crc kubenswrapper[4764]: I0127 07:46:20.994328 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405eb05b-d23b-4ef5-b1bf-617c22a27767-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.252045 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" 
event={"ID":"405eb05b-d23b-4ef5-b1bf-617c22a27767","Type":"ContainerDied","Data":"49e7f664c799939d8bdced27171b1fe3b55355eb0cba5f2a3ba3ac2d09f0a838"} Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.252095 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e7f664c799939d8bdced27171b1fe3b55355eb0cba5f2a3ba3ac2d09f0a838" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.252091 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.313706 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p"] Jan 27 07:46:21 crc kubenswrapper[4764]: E0127 07:46:21.314196 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405eb05b-d23b-4ef5-b1bf-617c22a27767" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.314225 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="405eb05b-d23b-4ef5-b1bf-617c22a27767" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.314542 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="405eb05b-d23b-4ef5-b1bf-617c22a27767" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.315341 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.317843 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.318740 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.318947 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.319116 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.322946 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p"] Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.404883 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkrz2\" (UniqueName: \"kubernetes.io/projected/302747f7-58c1-4c8d-8e21-e713bb849750-kube-api-access-vkrz2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.404971 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 
07:46:21.405178 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.507279 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.507685 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkrz2\" (UniqueName: \"kubernetes.io/projected/302747f7-58c1-4c8d-8e21-e713bb849750-kube-api-access-vkrz2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.507720 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.513118 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.521995 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.525904 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkrz2\" (UniqueName: \"kubernetes.io/projected/302747f7-58c1-4c8d-8e21-e713bb849750-kube-api-access-vkrz2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4kt4p\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:21 crc kubenswrapper[4764]: I0127 07:46:21.633292 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:22 crc kubenswrapper[4764]: I0127 07:46:22.150419 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p"] Jan 27 07:46:22 crc kubenswrapper[4764]: I0127 07:46:22.260032 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" event={"ID":"302747f7-58c1-4c8d-8e21-e713bb849750","Type":"ContainerStarted","Data":"a6ea8238fdc876c229f77ca0439b6ec2881135940c22fea3633c721be9d1de1c"} Jan 27 07:46:23 crc kubenswrapper[4764]: I0127 07:46:23.043881 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r44mw"] Jan 27 07:46:23 crc kubenswrapper[4764]: I0127 07:46:23.064111 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r44mw"] Jan 27 07:46:23 crc kubenswrapper[4764]: I0127 07:46:23.288422 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" event={"ID":"302747f7-58c1-4c8d-8e21-e713bb849750","Type":"ContainerStarted","Data":"c3ab654b12b91ee51986cd468b35b2f8e73d8ae2a7acb512da82b56cd2ba3be2"} Jan 27 07:46:23 crc kubenswrapper[4764]: I0127 07:46:23.322722 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" podStartSLOduration=1.592507742 podStartE2EDuration="2.322702182s" podCreationTimestamp="2026-01-27 07:46:21 +0000 UTC" firstStartedPulling="2026-01-27 07:46:22.157580628 +0000 UTC m=+1794.753203154" lastFinishedPulling="2026-01-27 07:46:22.887775028 +0000 UTC m=+1795.483397594" observedRunningTime="2026-01-27 07:46:23.312828982 +0000 UTC m=+1795.908451538" watchObservedRunningTime="2026-01-27 07:46:23.322702182 +0000 UTC m=+1795.918324718" Jan 27 07:46:24 crc kubenswrapper[4764]: I0127 
07:46:24.457147 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd369b40-e130-41c4-bc59-216bb4e60d7c" path="/var/lib/kubelet/pods/cd369b40-e130-41c4-bc59-216bb4e60d7c/volumes" Jan 27 07:46:29 crc kubenswrapper[4764]: I0127 07:46:29.438646 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:46:29 crc kubenswrapper[4764]: E0127 07:46:29.439260 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:46:42 crc kubenswrapper[4764]: I0127 07:46:42.922392 4764 scope.go:117] "RemoveContainer" containerID="3cb75292fb0c65e7de0017662858f0ff4945a05a42368411e5cb3fbadbf946cb" Jan 27 07:46:42 crc kubenswrapper[4764]: I0127 07:46:42.979253 4764 scope.go:117] "RemoveContainer" containerID="8b34dc3388d15a76af579da4c3f2960d3945d3156b0bf5700ebb78ce8a57c5b9" Jan 27 07:46:43 crc kubenswrapper[4764]: I0127 07:46:43.003945 4764 scope.go:117] "RemoveContainer" containerID="3b7df24d4dd85d91b68b23275463d229caee8743e38cce4bc92bde110c2d9a0b" Jan 27 07:46:43 crc kubenswrapper[4764]: I0127 07:46:43.045536 4764 scope.go:117] "RemoveContainer" containerID="f31ac71fece61dc24e009efc3b0e27cbd7e41b087b94aa01e6abbaebb2ea849a" Jan 27 07:46:43 crc kubenswrapper[4764]: I0127 07:46:43.083773 4764 scope.go:117] "RemoveContainer" containerID="87a7268565b7afb6603a2216312cf17f56282e1d904f5f91bae7087a4734993b" Jan 27 07:46:43 crc kubenswrapper[4764]: I0127 07:46:43.118332 4764 scope.go:117] "RemoveContainer" containerID="71ac7ae19b8f42fa0cfb537bc872ef5a1eed60621f3f71bc06104a40509731e9" Jan 27 07:46:43 crc kubenswrapper[4764]: I0127 
07:46:43.163647 4764 scope.go:117] "RemoveContainer" containerID="d89b4aed5fd83ad493e28436f3e3483e3901d068c3e76f8be76e760e0a870065" Jan 27 07:46:43 crc kubenswrapper[4764]: I0127 07:46:43.439388 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:46:43 crc kubenswrapper[4764]: E0127 07:46:43.439915 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:46:45 crc kubenswrapper[4764]: I0127 07:46:45.046987 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pkx7s"] Jan 27 07:46:45 crc kubenswrapper[4764]: I0127 07:46:45.054207 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pkx7s"] Jan 27 07:46:46 crc kubenswrapper[4764]: I0127 07:46:46.039942 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsxqz"] Jan 27 07:46:46 crc kubenswrapper[4764]: I0127 07:46:46.056134 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bsxqz"] Jan 27 07:46:46 crc kubenswrapper[4764]: I0127 07:46:46.450873 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4010c432-6801-418a-9a2f-d5b2f6798ce4" path="/var/lib/kubelet/pods/4010c432-6801-418a-9a2f-d5b2f6798ce4/volumes" Jan 27 07:46:46 crc kubenswrapper[4764]: I0127 07:46:46.451683 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0383119-401b-43eb-8966-2edcc3a90f83" path="/var/lib/kubelet/pods/e0383119-401b-43eb-8966-2edcc3a90f83/volumes" Jan 27 07:46:57 crc 
kubenswrapper[4764]: I0127 07:46:57.594221 4764 generic.go:334] "Generic (PLEG): container finished" podID="302747f7-58c1-4c8d-8e21-e713bb849750" containerID="c3ab654b12b91ee51986cd468b35b2f8e73d8ae2a7acb512da82b56cd2ba3be2" exitCode=0 Jan 27 07:46:57 crc kubenswrapper[4764]: I0127 07:46:57.594269 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" event={"ID":"302747f7-58c1-4c8d-8e21-e713bb849750","Type":"ContainerDied","Data":"c3ab654b12b91ee51986cd468b35b2f8e73d8ae2a7acb512da82b56cd2ba3be2"} Jan 27 07:46:58 crc kubenswrapper[4764]: I0127 07:46:58.438828 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:46:58 crc kubenswrapper[4764]: E0127 07:46:58.440383 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:46:58 crc kubenswrapper[4764]: I0127 07:46:58.969250 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.068155 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam\") pod \"302747f7-58c1-4c8d-8e21-e713bb849750\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.068218 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkrz2\" (UniqueName: \"kubernetes.io/projected/302747f7-58c1-4c8d-8e21-e713bb849750-kube-api-access-vkrz2\") pod \"302747f7-58c1-4c8d-8e21-e713bb849750\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.068465 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-inventory\") pod \"302747f7-58c1-4c8d-8e21-e713bb849750\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.073471 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302747f7-58c1-4c8d-8e21-e713bb849750-kube-api-access-vkrz2" (OuterVolumeSpecName: "kube-api-access-vkrz2") pod "302747f7-58c1-4c8d-8e21-e713bb849750" (UID: "302747f7-58c1-4c8d-8e21-e713bb849750"). InnerVolumeSpecName "kube-api-access-vkrz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:46:59 crc kubenswrapper[4764]: E0127 07:46:59.090108 4764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam podName:302747f7-58c1-4c8d-8e21-e713bb849750 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:46:59.590071677 +0000 UTC m=+1832.185694203 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam") pod "302747f7-58c1-4c8d-8e21-e713bb849750" (UID: "302747f7-58c1-4c8d-8e21-e713bb849750") : error deleting /var/lib/kubelet/pods/302747f7-58c1-4c8d-8e21-e713bb849750/volume-subpaths: remove /var/lib/kubelet/pods/302747f7-58c1-4c8d-8e21-e713bb849750/volume-subpaths: no such file or directory Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.092857 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-inventory" (OuterVolumeSpecName: "inventory") pod "302747f7-58c1-4c8d-8e21-e713bb849750" (UID: "302747f7-58c1-4c8d-8e21-e713bb849750"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.170348 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.170375 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkrz2\" (UniqueName: \"kubernetes.io/projected/302747f7-58c1-4c8d-8e21-e713bb849750-kube-api-access-vkrz2\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.612022 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" event={"ID":"302747f7-58c1-4c8d-8e21-e713bb849750","Type":"ContainerDied","Data":"a6ea8238fdc876c229f77ca0439b6ec2881135940c22fea3633c721be9d1de1c"} Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.612062 4764 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="a6ea8238fdc876c229f77ca0439b6ec2881135940c22fea3633c721be9d1de1c" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.612397 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4kt4p" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.678690 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam\") pod \"302747f7-58c1-4c8d-8e21-e713bb849750\" (UID: \"302747f7-58c1-4c8d-8e21-e713bb849750\") " Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.683638 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "302747f7-58c1-4c8d-8e21-e713bb849750" (UID: "302747f7-58c1-4c8d-8e21-e713bb849750"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.703492 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp"] Jan 27 07:46:59 crc kubenswrapper[4764]: E0127 07:46:59.704403 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302747f7-58c1-4c8d-8e21-e713bb849750" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.704508 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="302747f7-58c1-4c8d-8e21-e713bb849750" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.704838 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="302747f7-58c1-4c8d-8e21-e713bb849750" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.705856 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.711583 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp"] Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.781139 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.781204 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.781410 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfz5\" (UniqueName: \"kubernetes.io/projected/430b1bd3-ef93-47e7-a02b-df097c4b44d4-kube-api-access-cwfz5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.781592 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/302747f7-58c1-4c8d-8e21-e713bb849750-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:46:59 
crc kubenswrapper[4764]: I0127 07:46:59.883572 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.883636 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.883765 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwfz5\" (UniqueName: \"kubernetes.io/projected/430b1bd3-ef93-47e7-a02b-df097c4b44d4-kube-api-access-cwfz5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.889387 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.889703 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:46:59 crc kubenswrapper[4764]: I0127 07:46:59.925570 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwfz5\" (UniqueName: \"kubernetes.io/projected/430b1bd3-ef93-47e7-a02b-df097c4b44d4-kube-api-access-cwfz5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j55zp\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:47:00 crc kubenswrapper[4764]: I0127 07:47:00.055723 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:47:00 crc kubenswrapper[4764]: I0127 07:47:00.573386 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp"] Jan 27 07:47:00 crc kubenswrapper[4764]: I0127 07:47:00.619864 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" event={"ID":"430b1bd3-ef93-47e7-a02b-df097c4b44d4","Type":"ContainerStarted","Data":"b3afa4d0c1c2eda38eedf276f9225a94f883b0b3d73d266c9f7d877cf9257449"} Jan 27 07:47:01 crc kubenswrapper[4764]: I0127 07:47:01.628643 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" event={"ID":"430b1bd3-ef93-47e7-a02b-df097c4b44d4","Type":"ContainerStarted","Data":"3e534d5eabc48e4226081719a583d8d7c1ea466de2300bbae533b292509fae39"} Jan 27 07:47:01 crc kubenswrapper[4764]: I0127 07:47:01.652764 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" 
podStartSLOduration=2.017522698 podStartE2EDuration="2.652745117s" podCreationTimestamp="2026-01-27 07:46:59 +0000 UTC" firstStartedPulling="2026-01-27 07:47:00.576552408 +0000 UTC m=+1833.172174934" lastFinishedPulling="2026-01-27 07:47:01.211774827 +0000 UTC m=+1833.807397353" observedRunningTime="2026-01-27 07:47:01.643373162 +0000 UTC m=+1834.238995728" watchObservedRunningTime="2026-01-27 07:47:01.652745117 +0000 UTC m=+1834.248367643" Jan 27 07:47:12 crc kubenswrapper[4764]: I0127 07:47:12.439250 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:47:12 crc kubenswrapper[4764]: E0127 07:47:12.440119 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:47:23 crc kubenswrapper[4764]: I0127 07:47:23.439559 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:47:23 crc kubenswrapper[4764]: E0127 07:47:23.440556 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:47:30 crc kubenswrapper[4764]: I0127 07:47:30.040264 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-g2ngw"] Jan 27 07:47:30 crc kubenswrapper[4764]: 
I0127 07:47:30.052262 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-g2ngw"] Jan 27 07:47:30 crc kubenswrapper[4764]: I0127 07:47:30.453768 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6454f8af-d141-4b00-a06b-b5e2af100376" path="/var/lib/kubelet/pods/6454f8af-d141-4b00-a06b-b5e2af100376/volumes" Jan 27 07:47:36 crc kubenswrapper[4764]: I0127 07:47:36.444766 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:47:36 crc kubenswrapper[4764]: E0127 07:47:36.448957 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:47:43 crc kubenswrapper[4764]: I0127 07:47:43.324826 4764 scope.go:117] "RemoveContainer" containerID="cc915f0160e9989b3ef76cb8c02558efbd07ba674a8bbda27262e8d45eef3a06" Jan 27 07:47:43 crc kubenswrapper[4764]: I0127 07:47:43.380345 4764 scope.go:117] "RemoveContainer" containerID="b45a604b9224386bad3e7ed18cecde1231ed0adbcc6a706f86122141fab50eaf" Jan 27 07:47:43 crc kubenswrapper[4764]: I0127 07:47:43.445637 4764 scope.go:117] "RemoveContainer" containerID="3138b61e039dd9648b93b62155cd01eb309f57ab907e304f4bae65b3f80cbba5" Jan 27 07:47:50 crc kubenswrapper[4764]: I0127 07:47:50.086241 4764 generic.go:334] "Generic (PLEG): container finished" podID="430b1bd3-ef93-47e7-a02b-df097c4b44d4" containerID="3e534d5eabc48e4226081719a583d8d7c1ea466de2300bbae533b292509fae39" exitCode=0 Jan 27 07:47:50 crc kubenswrapper[4764]: I0127 07:47:50.086327 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" event={"ID":"430b1bd3-ef93-47e7-a02b-df097c4b44d4","Type":"ContainerDied","Data":"3e534d5eabc48e4226081719a583d8d7c1ea466de2300bbae533b292509fae39"} Jan 27 07:47:50 crc kubenswrapper[4764]: I0127 07:47:50.439383 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:47:50 crc kubenswrapper[4764]: E0127 07:47:50.439888 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.501939 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.621135 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwfz5\" (UniqueName: \"kubernetes.io/projected/430b1bd3-ef93-47e7-a02b-df097c4b44d4-kube-api-access-cwfz5\") pod \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.621290 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-ssh-key-openstack-edpm-ipam\") pod \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.621366 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-inventory\") pod \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\" (UID: \"430b1bd3-ef93-47e7-a02b-df097c4b44d4\") " Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.634164 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430b1bd3-ef93-47e7-a02b-df097c4b44d4-kube-api-access-cwfz5" (OuterVolumeSpecName: "kube-api-access-cwfz5") pod "430b1bd3-ef93-47e7-a02b-df097c4b44d4" (UID: "430b1bd3-ef93-47e7-a02b-df097c4b44d4"). InnerVolumeSpecName "kube-api-access-cwfz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.648595 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-inventory" (OuterVolumeSpecName: "inventory") pod "430b1bd3-ef93-47e7-a02b-df097c4b44d4" (UID: "430b1bd3-ef93-47e7-a02b-df097c4b44d4"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.648612 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "430b1bd3-ef93-47e7-a02b-df097c4b44d4" (UID: "430b1bd3-ef93-47e7-a02b-df097c4b44d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.724476 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.724515 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwfz5\" (UniqueName: \"kubernetes.io/projected/430b1bd3-ef93-47e7-a02b-df097c4b44d4-kube-api-access-cwfz5\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:51 crc kubenswrapper[4764]: I0127 07:47:51.724531 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/430b1bd3-ef93-47e7-a02b-df097c4b44d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.102468 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" event={"ID":"430b1bd3-ef93-47e7-a02b-df097c4b44d4","Type":"ContainerDied","Data":"b3afa4d0c1c2eda38eedf276f9225a94f883b0b3d73d266c9f7d877cf9257449"} Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.102788 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3afa4d0c1c2eda38eedf276f9225a94f883b0b3d73d266c9f7d877cf9257449" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 
07:47:52.102844 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j55zp" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.193675 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pnbrj"] Jan 27 07:47:52 crc kubenswrapper[4764]: E0127 07:47:52.194140 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430b1bd3-ef93-47e7-a02b-df097c4b44d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.194165 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="430b1bd3-ef93-47e7-a02b-df097c4b44d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.194371 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="430b1bd3-ef93-47e7-a02b-df097c4b44d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.195222 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.205576 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.205744 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.205753 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.206394 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.213634 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pnbrj"] Jan 27 07:47:52 crc kubenswrapper[4764]: E0127 07:47:52.261554 4764 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod430b1bd3_ef93_47e7_a02b_df097c4b44d4.slice/crio-b3afa4d0c1c2eda38eedf276f9225a94f883b0b3d73d266c9f7d877cf9257449\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod430b1bd3_ef93_47e7_a02b_df097c4b44d4.slice\": RecentStats: unable to find data in memory cache]" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.337016 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhhfr\" (UniqueName: \"kubernetes.io/projected/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-kube-api-access-fhhfr\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 
07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.337086 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.337246 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.438998 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.439355 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhhfr\" (UniqueName: \"kubernetes.io/projected/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-kube-api-access-fhhfr\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.439420 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-ssh-key-openstack-edpm-ipam\") pod 
\"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.444643 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.446669 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.460334 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhhfr\" (UniqueName: \"kubernetes.io/projected/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-kube-api-access-fhhfr\") pod \"ssh-known-hosts-edpm-deployment-pnbrj\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:52 crc kubenswrapper[4764]: I0127 07:47:52.518547 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:47:53 crc kubenswrapper[4764]: I0127 07:47:53.060324 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pnbrj"] Jan 27 07:47:53 crc kubenswrapper[4764]: I0127 07:47:53.065783 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:47:53 crc kubenswrapper[4764]: I0127 07:47:53.114001 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" event={"ID":"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7","Type":"ContainerStarted","Data":"b3780e0779c744eee7c7afe87d4e621e2e8e8f15823438fa3855a88e7372907e"} Jan 27 07:47:54 crc kubenswrapper[4764]: I0127 07:47:54.127055 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" event={"ID":"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7","Type":"ContainerStarted","Data":"48e2affaebf182e31b2abf1600f2cd8ff48efa2f52adcdece182cd0a3b71c5c2"} Jan 27 07:47:54 crc kubenswrapper[4764]: I0127 07:47:54.152302 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" podStartSLOduration=1.618862646 podStartE2EDuration="2.152283398s" podCreationTimestamp="2026-01-27 07:47:52 +0000 UTC" firstStartedPulling="2026-01-27 07:47:53.065328385 +0000 UTC m=+1885.660950911" lastFinishedPulling="2026-01-27 07:47:53.598749137 +0000 UTC m=+1886.194371663" observedRunningTime="2026-01-27 07:47:54.143145159 +0000 UTC m=+1886.738767685" watchObservedRunningTime="2026-01-27 07:47:54.152283398 +0000 UTC m=+1886.747905914" Jan 27 07:48:01 crc kubenswrapper[4764]: I0127 07:48:01.185496 4764 generic.go:334] "Generic (PLEG): container finished" podID="b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7" containerID="48e2affaebf182e31b2abf1600f2cd8ff48efa2f52adcdece182cd0a3b71c5c2" exitCode=0 Jan 27 07:48:01 crc kubenswrapper[4764]: 
I0127 07:48:01.185606 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" event={"ID":"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7","Type":"ContainerDied","Data":"48e2affaebf182e31b2abf1600f2cd8ff48efa2f52adcdece182cd0a3b71c5c2"} Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.656373 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.754355 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-ssh-key-openstack-edpm-ipam\") pod \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.754481 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhhfr\" (UniqueName: \"kubernetes.io/projected/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-kube-api-access-fhhfr\") pod \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.754747 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-inventory-0\") pod \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\" (UID: \"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7\") " Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.778642 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-kube-api-access-fhhfr" (OuterVolumeSpecName: "kube-api-access-fhhfr") pod "b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7" (UID: "b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7"). InnerVolumeSpecName "kube-api-access-fhhfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.781678 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7" (UID: "b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.782198 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7" (UID: "b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.857476 4764 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.857505 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:02 crc kubenswrapper[4764]: I0127 07:48:02.857517 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhhfr\" (UniqueName: \"kubernetes.io/projected/b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7-kube-api-access-fhhfr\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.205911 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" 
event={"ID":"b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7","Type":"ContainerDied","Data":"b3780e0779c744eee7c7afe87d4e621e2e8e8f15823438fa3855a88e7372907e"} Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.206245 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3780e0779c744eee7c7afe87d4e621e2e8e8f15823438fa3855a88e7372907e" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.205967 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pnbrj" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.296756 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw"] Jan 27 07:48:03 crc kubenswrapper[4764]: E0127 07:48:03.297197 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7" containerName="ssh-known-hosts-edpm-deployment" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.297216 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7" containerName="ssh-known-hosts-edpm-deployment" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.297469 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7" containerName="ssh-known-hosts-edpm-deployment" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.298205 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.299980 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.300641 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.308559 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.308668 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.312140 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw"] Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.468675 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.468758 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.468818 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nv5c\" (UniqueName: \"kubernetes.io/projected/d210edad-b0d1-4060-8c14-bb8f137338c7-kube-api-access-7nv5c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.570982 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.571089 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.571153 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nv5c\" (UniqueName: \"kubernetes.io/projected/d210edad-b0d1-4060-8c14-bb8f137338c7-kube-api-access-7nv5c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.575294 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: 
\"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.576172 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.587936 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nv5c\" (UniqueName: \"kubernetes.io/projected/d210edad-b0d1-4060-8c14-bb8f137338c7-kube-api-access-7nv5c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rnkw\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:03 crc kubenswrapper[4764]: I0127 07:48:03.678024 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:04 crc kubenswrapper[4764]: I0127 07:48:04.208002 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw"] Jan 27 07:48:04 crc kubenswrapper[4764]: I0127 07:48:04.438940 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:48:04 crc kubenswrapper[4764]: E0127 07:48:04.439170 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:48:05 crc kubenswrapper[4764]: I0127 07:48:05.246112 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" event={"ID":"d210edad-b0d1-4060-8c14-bb8f137338c7","Type":"ContainerStarted","Data":"a5f0544cd8415576e40a14e828047893d9746f4a7d8127b7f8b0e113b3ba3213"} Jan 27 07:48:05 crc kubenswrapper[4764]: I0127 07:48:05.246741 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" event={"ID":"d210edad-b0d1-4060-8c14-bb8f137338c7","Type":"ContainerStarted","Data":"0e638e0245fefda203a142e30d70a79fb5e48c25a1fc656c073ef600e3735a0c"} Jan 27 07:48:05 crc kubenswrapper[4764]: I0127 07:48:05.263147 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" podStartSLOduration=1.8614723469999999 podStartE2EDuration="2.263104073s" podCreationTimestamp="2026-01-27 07:48:03 +0000 UTC" firstStartedPulling="2026-01-27 07:48:04.214690082 
+0000 UTC m=+1896.810312608" lastFinishedPulling="2026-01-27 07:48:04.616321808 +0000 UTC m=+1897.211944334" observedRunningTime="2026-01-27 07:48:05.262929828 +0000 UTC m=+1897.858552374" watchObservedRunningTime="2026-01-27 07:48:05.263104073 +0000 UTC m=+1897.858726609" Jan 27 07:48:12 crc kubenswrapper[4764]: E0127 07:48:12.723332 4764 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd210edad_b0d1_4060_8c14_bb8f137338c7.slice/crio-a5f0544cd8415576e40a14e828047893d9746f4a7d8127b7f8b0e113b3ba3213.scope\": RecentStats: unable to find data in memory cache]" Jan 27 07:48:13 crc kubenswrapper[4764]: I0127 07:48:13.314554 4764 generic.go:334] "Generic (PLEG): container finished" podID="d210edad-b0d1-4060-8c14-bb8f137338c7" containerID="a5f0544cd8415576e40a14e828047893d9746f4a7d8127b7f8b0e113b3ba3213" exitCode=0 Jan 27 07:48:13 crc kubenswrapper[4764]: I0127 07:48:13.314647 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" event={"ID":"d210edad-b0d1-4060-8c14-bb8f137338c7","Type":"ContainerDied","Data":"a5f0544cd8415576e40a14e828047893d9746f4a7d8127b7f8b0e113b3ba3213"} Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.811669 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.891709 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-inventory\") pod \"d210edad-b0d1-4060-8c14-bb8f137338c7\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.891769 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nv5c\" (UniqueName: \"kubernetes.io/projected/d210edad-b0d1-4060-8c14-bb8f137338c7-kube-api-access-7nv5c\") pod \"d210edad-b0d1-4060-8c14-bb8f137338c7\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.891806 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-ssh-key-openstack-edpm-ipam\") pod \"d210edad-b0d1-4060-8c14-bb8f137338c7\" (UID: \"d210edad-b0d1-4060-8c14-bb8f137338c7\") " Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.901725 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d210edad-b0d1-4060-8c14-bb8f137338c7-kube-api-access-7nv5c" (OuterVolumeSpecName: "kube-api-access-7nv5c") pod "d210edad-b0d1-4060-8c14-bb8f137338c7" (UID: "d210edad-b0d1-4060-8c14-bb8f137338c7"). InnerVolumeSpecName "kube-api-access-7nv5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.922785 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-inventory" (OuterVolumeSpecName: "inventory") pod "d210edad-b0d1-4060-8c14-bb8f137338c7" (UID: "d210edad-b0d1-4060-8c14-bb8f137338c7"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.951498 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d210edad-b0d1-4060-8c14-bb8f137338c7" (UID: "d210edad-b0d1-4060-8c14-bb8f137338c7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.994171 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.994211 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nv5c\" (UniqueName: \"kubernetes.io/projected/d210edad-b0d1-4060-8c14-bb8f137338c7-kube-api-access-7nv5c\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:14 crc kubenswrapper[4764]: I0127 07:48:14.994225 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d210edad-b0d1-4060-8c14-bb8f137338c7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.333654 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" event={"ID":"d210edad-b0d1-4060-8c14-bb8f137338c7","Type":"ContainerDied","Data":"0e638e0245fefda203a142e30d70a79fb5e48c25a1fc656c073ef600e3735a0c"} Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.333982 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e638e0245fefda203a142e30d70a79fb5e48c25a1fc656c073ef600e3735a0c" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 
07:48:15.333730 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rnkw" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.439906 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh"] Jan 27 07:48:15 crc kubenswrapper[4764]: E0127 07:48:15.440570 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d210edad-b0d1-4060-8c14-bb8f137338c7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.440647 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="d210edad-b0d1-4060-8c14-bb8f137338c7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.440887 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="d210edad-b0d1-4060-8c14-bb8f137338c7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.441726 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.444814 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.445045 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.445523 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.445529 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.455518 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh"] Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.502898 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.503185 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.503772 4764 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzj77\" (UniqueName: \"kubernetes.io/projected/cae6078e-ba1a-4ada-89be-3d6b35993b05-kube-api-access-gzj77\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.606207 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.606696 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzj77\" (UniqueName: \"kubernetes.io/projected/cae6078e-ba1a-4ada-89be-3d6b35993b05-kube-api-access-gzj77\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.606847 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.611740 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.615044 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.622560 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzj77\" (UniqueName: \"kubernetes.io/projected/cae6078e-ba1a-4ada-89be-3d6b35993b05-kube-api-access-gzj77\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:15 crc kubenswrapper[4764]: I0127 07:48:15.772498 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:16 crc kubenswrapper[4764]: I0127 07:48:16.344646 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh"] Jan 27 07:48:17 crc kubenswrapper[4764]: I0127 07:48:17.359558 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" event={"ID":"cae6078e-ba1a-4ada-89be-3d6b35993b05","Type":"ContainerStarted","Data":"9efd801e3c7731f0d6d2f04000b68a0d0a887d6351d063ca0252dbed82e93b90"} Jan 27 07:48:17 crc kubenswrapper[4764]: I0127 07:48:17.360129 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" event={"ID":"cae6078e-ba1a-4ada-89be-3d6b35993b05","Type":"ContainerStarted","Data":"da90168201c2166480435f4733ce774bec4c7abe3becf3057b8eb849f36fbe45"} Jan 27 07:48:17 crc kubenswrapper[4764]: I0127 07:48:17.382135 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" podStartSLOduration=1.750502271 podStartE2EDuration="2.382108501s" podCreationTimestamp="2026-01-27 07:48:15 +0000 UTC" firstStartedPulling="2026-01-27 07:48:16.346547021 +0000 UTC m=+1908.942169547" lastFinishedPulling="2026-01-27 07:48:16.978153261 +0000 UTC m=+1909.573775777" observedRunningTime="2026-01-27 07:48:17.3743772 +0000 UTC m=+1909.969999736" watchObservedRunningTime="2026-01-27 07:48:17.382108501 +0000 UTC m=+1909.977731027" Jan 27 07:48:17 crc kubenswrapper[4764]: I0127 07:48:17.438926 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:48:17 crc kubenswrapper[4764]: E0127 07:48:17.439209 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:48:27 crc kubenswrapper[4764]: I0127 07:48:27.450873 4764 generic.go:334] "Generic (PLEG): container finished" podID="cae6078e-ba1a-4ada-89be-3d6b35993b05" containerID="9efd801e3c7731f0d6d2f04000b68a0d0a887d6351d063ca0252dbed82e93b90" exitCode=0 Jan 27 07:48:27 crc kubenswrapper[4764]: I0127 07:48:27.450952 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" event={"ID":"cae6078e-ba1a-4ada-89be-3d6b35993b05","Type":"ContainerDied","Data":"9efd801e3c7731f0d6d2f04000b68a0d0a887d6351d063ca0252dbed82e93b90"} Jan 27 07:48:28 crc kubenswrapper[4764]: I0127 07:48:28.869899 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:28 crc kubenswrapper[4764]: I0127 07:48:28.981657 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-inventory\") pod \"cae6078e-ba1a-4ada-89be-3d6b35993b05\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " Jan 27 07:48:28 crc kubenswrapper[4764]: I0127 07:48:28.982130 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-ssh-key-openstack-edpm-ipam\") pod \"cae6078e-ba1a-4ada-89be-3d6b35993b05\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " Jan 27 07:48:28 crc kubenswrapper[4764]: I0127 07:48:28.982190 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzj77\" (UniqueName: 
\"kubernetes.io/projected/cae6078e-ba1a-4ada-89be-3d6b35993b05-kube-api-access-gzj77\") pod \"cae6078e-ba1a-4ada-89be-3d6b35993b05\" (UID: \"cae6078e-ba1a-4ada-89be-3d6b35993b05\") " Jan 27 07:48:28 crc kubenswrapper[4764]: I0127 07:48:28.989752 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae6078e-ba1a-4ada-89be-3d6b35993b05-kube-api-access-gzj77" (OuterVolumeSpecName: "kube-api-access-gzj77") pod "cae6078e-ba1a-4ada-89be-3d6b35993b05" (UID: "cae6078e-ba1a-4ada-89be-3d6b35993b05"). InnerVolumeSpecName "kube-api-access-gzj77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.010794 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-inventory" (OuterVolumeSpecName: "inventory") pod "cae6078e-ba1a-4ada-89be-3d6b35993b05" (UID: "cae6078e-ba1a-4ada-89be-3d6b35993b05"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.021584 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cae6078e-ba1a-4ada-89be-3d6b35993b05" (UID: "cae6078e-ba1a-4ada-89be-3d6b35993b05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.084175 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.084210 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzj77\" (UniqueName: \"kubernetes.io/projected/cae6078e-ba1a-4ada-89be-3d6b35993b05-kube-api-access-gzj77\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.084222 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cae6078e-ba1a-4ada-89be-3d6b35993b05-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.470302 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" event={"ID":"cae6078e-ba1a-4ada-89be-3d6b35993b05","Type":"ContainerDied","Data":"da90168201c2166480435f4733ce774bec4c7abe3becf3057b8eb849f36fbe45"} Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.470344 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da90168201c2166480435f4733ce774bec4c7abe3becf3057b8eb849f36fbe45" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.470404 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.618069 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp"] Jan 27 07:48:29 crc kubenswrapper[4764]: E0127 07:48:29.618511 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae6078e-ba1a-4ada-89be-3d6b35993b05" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.618536 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae6078e-ba1a-4ada-89be-3d6b35993b05" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.618773 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae6078e-ba1a-4ada-89be-3d6b35993b05" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.619527 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.621868 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.622351 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.622414 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.622361 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.622581 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.622704 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.623347 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.623429 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.629267 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp"] Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.697972 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698017 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698042 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698071 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698094 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698111 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698142 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698171 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698217 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698242 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698271 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698293 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6l9m\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-kube-api-access-v6l9m\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698330 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.698349 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.799695 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.799761 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.799837 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.799871 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.799915 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.799949 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6l9m\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-kube-api-access-v6l9m\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800013 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800039 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800090 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800119 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800149 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: 
\"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800190 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800223 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.800249 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.806172 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 
07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.806704 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.806768 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.807982 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.808004 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.809614 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.809724 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.809878 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.809993 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.810345 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.810915 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.811095 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.818357 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.824782 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6l9m\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-kube-api-access-v6l9m\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp\" (UID: 
\"faa09b18-d734-422b-8dcc-ff3aad34a549\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:29 crc kubenswrapper[4764]: I0127 07:48:29.994433 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:48:30 crc kubenswrapper[4764]: I0127 07:48:30.644202 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp"] Jan 27 07:48:31 crc kubenswrapper[4764]: I0127 07:48:31.438639 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:48:31 crc kubenswrapper[4764]: E0127 07:48:31.439557 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:48:31 crc kubenswrapper[4764]: I0127 07:48:31.498627 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" event={"ID":"faa09b18-d734-422b-8dcc-ff3aad34a549","Type":"ContainerStarted","Data":"a5cc71220269c07c5dccf7fcda5a1ee8dc3d2e27b38549c242aa2e6dd2f64f69"} Jan 27 07:48:31 crc kubenswrapper[4764]: I0127 07:48:31.498765 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" event={"ID":"faa09b18-d734-422b-8dcc-ff3aad34a549","Type":"ContainerStarted","Data":"8ab544a96a316060c9c792c31d335f816905d66be981de1295548298f4bcc7c5"} Jan 27 07:48:31 crc kubenswrapper[4764]: I0127 07:48:31.530695 4764 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" podStartSLOduration=2.082670514 podStartE2EDuration="2.530670956s" podCreationTimestamp="2026-01-27 07:48:29 +0000 UTC" firstStartedPulling="2026-01-27 07:48:30.64521322 +0000 UTC m=+1923.240835766" lastFinishedPulling="2026-01-27 07:48:31.093213682 +0000 UTC m=+1923.688836208" observedRunningTime="2026-01-27 07:48:31.524750304 +0000 UTC m=+1924.120372880" watchObservedRunningTime="2026-01-27 07:48:31.530670956 +0000 UTC m=+1924.126293492" Jan 27 07:48:42 crc kubenswrapper[4764]: I0127 07:48:42.438680 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:48:42 crc kubenswrapper[4764]: E0127 07:48:42.439798 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:48:53 crc kubenswrapper[4764]: I0127 07:48:53.439543 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:48:53 crc kubenswrapper[4764]: E0127 07:48:53.440295 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:49:05 crc kubenswrapper[4764]: I0127 07:49:05.438738 4764 scope.go:117] "RemoveContainer" 
containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:49:05 crc kubenswrapper[4764]: E0127 07:49:05.440085 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:49:08 crc kubenswrapper[4764]: I0127 07:49:08.851336 4764 generic.go:334] "Generic (PLEG): container finished" podID="faa09b18-d734-422b-8dcc-ff3aad34a549" containerID="a5cc71220269c07c5dccf7fcda5a1ee8dc3d2e27b38549c242aa2e6dd2f64f69" exitCode=0 Jan 27 07:49:08 crc kubenswrapper[4764]: I0127 07:49:08.851425 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" event={"ID":"faa09b18-d734-422b-8dcc-ff3aad34a549","Type":"ContainerDied","Data":"a5cc71220269c07c5dccf7fcda5a1ee8dc3d2e27b38549c242aa2e6dd2f64f69"} Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.295811 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.351844 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.351953 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352016 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-ovn-default-certs-0\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352069 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-inventory\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352114 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352149 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-neutron-metadata-combined-ca-bundle\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352172 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-telemetry-combined-ca-bundle\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352223 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-nova-combined-ca-bundle\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352271 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-bootstrap-combined-ca-bundle\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352319 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6l9m\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-kube-api-access-v6l9m\") 
pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352379 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-repo-setup-combined-ca-bundle\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352423 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-libvirt-combined-ca-bundle\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352456 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ovn-combined-ca-bundle\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.352494 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ssh-key-openstack-edpm-ipam\") pod \"faa09b18-d734-422b-8dcc-ff3aad34a549\" (UID: \"faa09b18-d734-422b-8dcc-ff3aad34a549\") " Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.358414 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.358638 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.359142 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.359162 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-kube-api-access-v6l9m" (OuterVolumeSpecName: "kube-api-access-v6l9m") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "kube-api-access-v6l9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.359506 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.360240 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.360889 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.361861 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.361989 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.362007 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.362659 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.363207 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.386976 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.387882 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-inventory" (OuterVolumeSpecName: "inventory") pod "faa09b18-d734-422b-8dcc-ff3aad34a549" (UID: "faa09b18-d734-422b-8dcc-ff3aad34a549"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454316 4764 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454349 4764 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454361 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454375 4764 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454390 4764 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454402 4764 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454416 4764 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454427 4764 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454442 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6l9m\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-kube-api-access-v6l9m\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454452 4764 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: 
I0127 07:49:10.454476 4764 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454486 4764 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454494 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa09b18-d734-422b-8dcc-ff3aad34a549-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.454504 4764 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faa09b18-d734-422b-8dcc-ff3aad34a549-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.876526 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" event={"ID":"faa09b18-d734-422b-8dcc-ff3aad34a549","Type":"ContainerDied","Data":"8ab544a96a316060c9c792c31d335f816905d66be981de1295548298f4bcc7c5"} Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.876578 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab544a96a316060c9c792c31d335f816905d66be981de1295548298f4bcc7c5" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.876602 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.977935 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7"] Jan 27 07:49:10 crc kubenswrapper[4764]: E0127 07:49:10.978310 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa09b18-d734-422b-8dcc-ff3aad34a549" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.978325 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa09b18-d734-422b-8dcc-ff3aad34a549" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.978521 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa09b18-d734-422b-8dcc-ff3aad34a549" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.979121 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.981528 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.981541 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.982089 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.983204 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:49:10 crc kubenswrapper[4764]: I0127 07:49:10.983411 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.000411 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7"] Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.067271 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.067398 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: 
\"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.067475 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.067526 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.067629 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75c9w\" (UniqueName: \"kubernetes.io/projected/c26586b8-9b42-42de-9b7d-4b8081ee2a67-kube-api-access-75c9w\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.170040 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.170606 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-75c9w\" (UniqueName: \"kubernetes.io/projected/c26586b8-9b42-42de-9b7d-4b8081ee2a67-kube-api-access-75c9w\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.170804 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.170974 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.171122 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.172085 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.175456 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.175929 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.178286 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.191142 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75c9w\" (UniqueName: \"kubernetes.io/projected/c26586b8-9b42-42de-9b7d-4b8081ee2a67-kube-api-access-75c9w\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ndkg7\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.301947 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:49:11 crc kubenswrapper[4764]: I0127 07:49:11.880874 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7"] Jan 27 07:49:12 crc kubenswrapper[4764]: I0127 07:49:12.898203 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" event={"ID":"c26586b8-9b42-42de-9b7d-4b8081ee2a67","Type":"ContainerStarted","Data":"3c6c2203835799abb1ed8db24ee9d5fc2a4dd8cc80e7048481adce721b80bfac"} Jan 27 07:49:12 crc kubenswrapper[4764]: I0127 07:49:12.898589 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" event={"ID":"c26586b8-9b42-42de-9b7d-4b8081ee2a67","Type":"ContainerStarted","Data":"c8d4fdb7291fbf36212744539a0f2251d082f744f179f3432e09429ee11b6d66"} Jan 27 07:49:12 crc kubenswrapper[4764]: I0127 07:49:12.923697 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" podStartSLOduration=2.3203814 podStartE2EDuration="2.923673062s" podCreationTimestamp="2026-01-27 07:49:10 +0000 UTC" firstStartedPulling="2026-01-27 07:49:11.889731361 +0000 UTC m=+1964.485353887" lastFinishedPulling="2026-01-27 07:49:12.493022983 +0000 UTC m=+1965.088645549" observedRunningTime="2026-01-27 07:49:12.914797912 +0000 UTC m=+1965.510420468" watchObservedRunningTime="2026-01-27 07:49:12.923673062 +0000 UTC m=+1965.519295598" Jan 27 07:49:17 crc kubenswrapper[4764]: I0127 07:49:17.439225 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:49:17 crc kubenswrapper[4764]: E0127 07:49:17.440469 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.729473 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-54r77"] Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.737709 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.759075 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54r77"] Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.878699 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-utilities\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.878964 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wwg\" (UniqueName: \"kubernetes.io/projected/5f56c117-67d5-4f1b-a7da-d24685913f51-kube-api-access-m4wwg\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.879299 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-catalog-content\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " 
pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.981680 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wwg\" (UniqueName: \"kubernetes.io/projected/5f56c117-67d5-4f1b-a7da-d24685913f51-kube-api-access-m4wwg\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.981798 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-catalog-content\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.981858 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-utilities\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.982350 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-utilities\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:28 crc kubenswrapper[4764]: I0127 07:49:28.982389 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-catalog-content\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:29 crc 
kubenswrapper[4764]: I0127 07:49:29.011785 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wwg\" (UniqueName: \"kubernetes.io/projected/5f56c117-67d5-4f1b-a7da-d24685913f51-kube-api-access-m4wwg\") pod \"redhat-operators-54r77\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:29 crc kubenswrapper[4764]: I0127 07:49:29.070655 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:29 crc kubenswrapper[4764]: W0127 07:49:29.552180 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f56c117_67d5_4f1b_a7da_d24685913f51.slice/crio-64db5c1e0235207be962cf2af981d3f9047183d2d492841f60c69e1cf7b3c0b8 WatchSource:0}: Error finding container 64db5c1e0235207be962cf2af981d3f9047183d2d492841f60c69e1cf7b3c0b8: Status 404 returned error can't find the container with id 64db5c1e0235207be962cf2af981d3f9047183d2d492841f60c69e1cf7b3c0b8 Jan 27 07:49:29 crc kubenswrapper[4764]: I0127 07:49:29.555954 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54r77"] Jan 27 07:49:30 crc kubenswrapper[4764]: I0127 07:49:30.049211 4764 generic.go:334] "Generic (PLEG): container finished" podID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerID="4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63" exitCode=0 Jan 27 07:49:30 crc kubenswrapper[4764]: I0127 07:49:30.049250 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54r77" event={"ID":"5f56c117-67d5-4f1b-a7da-d24685913f51","Type":"ContainerDied","Data":"4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63"} Jan 27 07:49:30 crc kubenswrapper[4764]: I0127 07:49:30.049651 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-54r77" event={"ID":"5f56c117-67d5-4f1b-a7da-d24685913f51","Type":"ContainerStarted","Data":"64db5c1e0235207be962cf2af981d3f9047183d2d492841f60c69e1cf7b3c0b8"} Jan 27 07:49:31 crc kubenswrapper[4764]: I0127 07:49:31.057831 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54r77" event={"ID":"5f56c117-67d5-4f1b-a7da-d24685913f51","Type":"ContainerStarted","Data":"a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7"} Jan 27 07:49:32 crc kubenswrapper[4764]: I0127 07:49:32.069104 4764 generic.go:334] "Generic (PLEG): container finished" podID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerID="a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7" exitCode=0 Jan 27 07:49:32 crc kubenswrapper[4764]: I0127 07:49:32.069202 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54r77" event={"ID":"5f56c117-67d5-4f1b-a7da-d24685913f51","Type":"ContainerDied","Data":"a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7"} Jan 27 07:49:32 crc kubenswrapper[4764]: I0127 07:49:32.438558 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:49:33 crc kubenswrapper[4764]: I0127 07:49:33.079261 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"1b92b7516385baca0d4396dbba49c5900e0bc01a6b969ff7efacd572bb1ac811"} Jan 27 07:49:33 crc kubenswrapper[4764]: I0127 07:49:33.081200 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54r77" event={"ID":"5f56c117-67d5-4f1b-a7da-d24685913f51","Type":"ContainerStarted","Data":"0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22"} Jan 27 07:49:34 crc kubenswrapper[4764]: I0127 07:49:34.112740 
4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-54r77" podStartSLOduration=3.350405872 podStartE2EDuration="6.112721913s" podCreationTimestamp="2026-01-27 07:49:28 +0000 UTC" firstStartedPulling="2026-01-27 07:49:30.050705117 +0000 UTC m=+1982.646327643" lastFinishedPulling="2026-01-27 07:49:32.813021168 +0000 UTC m=+1985.408643684" observedRunningTime="2026-01-27 07:49:34.108214932 +0000 UTC m=+1986.703837458" watchObservedRunningTime="2026-01-27 07:49:34.112721913 +0000 UTC m=+1986.708344439" Jan 27 07:49:39 crc kubenswrapper[4764]: I0127 07:49:39.071825 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:39 crc kubenswrapper[4764]: I0127 07:49:39.072363 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:39 crc kubenswrapper[4764]: I0127 07:49:39.124911 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:39 crc kubenswrapper[4764]: I0127 07:49:39.186357 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:39 crc kubenswrapper[4764]: I0127 07:49:39.358101 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54r77"] Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.150106 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-54r77" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="registry-server" containerID="cri-o://0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22" gracePeriod=2 Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.622238 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.670590 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-utilities\") pod \"5f56c117-67d5-4f1b-a7da-d24685913f51\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.670716 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-catalog-content\") pod \"5f56c117-67d5-4f1b-a7da-d24685913f51\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.670788 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wwg\" (UniqueName: \"kubernetes.io/projected/5f56c117-67d5-4f1b-a7da-d24685913f51-kube-api-access-m4wwg\") pod \"5f56c117-67d5-4f1b-a7da-d24685913f51\" (UID: \"5f56c117-67d5-4f1b-a7da-d24685913f51\") " Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.671588 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-utilities" (OuterVolumeSpecName: "utilities") pod "5f56c117-67d5-4f1b-a7da-d24685913f51" (UID: "5f56c117-67d5-4f1b-a7da-d24685913f51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.677313 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f56c117-67d5-4f1b-a7da-d24685913f51-kube-api-access-m4wwg" (OuterVolumeSpecName: "kube-api-access-m4wwg") pod "5f56c117-67d5-4f1b-a7da-d24685913f51" (UID: "5f56c117-67d5-4f1b-a7da-d24685913f51"). InnerVolumeSpecName "kube-api-access-m4wwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.772423 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wwg\" (UniqueName: \"kubernetes.io/projected/5f56c117-67d5-4f1b-a7da-d24685913f51-kube-api-access-m4wwg\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.772465 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.799877 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f56c117-67d5-4f1b-a7da-d24685913f51" (UID: "5f56c117-67d5-4f1b-a7da-d24685913f51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:49:41 crc kubenswrapper[4764]: I0127 07:49:41.873636 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56c117-67d5-4f1b-a7da-d24685913f51-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.163123 4764 generic.go:334] "Generic (PLEG): container finished" podID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerID="0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22" exitCode=0 Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.163197 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54r77" event={"ID":"5f56c117-67d5-4f1b-a7da-d24685913f51","Type":"ContainerDied","Data":"0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22"} Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.163384 4764 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-54r77" event={"ID":"5f56c117-67d5-4f1b-a7da-d24685913f51","Type":"ContainerDied","Data":"64db5c1e0235207be962cf2af981d3f9047183d2d492841f60c69e1cf7b3c0b8"} Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.163410 4764 scope.go:117] "RemoveContainer" containerID="0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.163237 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54r77" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.198740 4764 scope.go:117] "RemoveContainer" containerID="a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.210376 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54r77"] Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.217715 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-54r77"] Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.224236 4764 scope.go:117] "RemoveContainer" containerID="4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.274491 4764 scope.go:117] "RemoveContainer" containerID="0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22" Jan 27 07:49:42 crc kubenswrapper[4764]: E0127 07:49:42.275087 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22\": container with ID starting with 0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22 not found: ID does not exist" containerID="0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.275154 4764 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22"} err="failed to get container status \"0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22\": rpc error: code = NotFound desc = could not find container \"0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22\": container with ID starting with 0b805030c1ba25fad331f7fde3baf8e75e962216d90b51430abb201aadc42f22 not found: ID does not exist" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.275199 4764 scope.go:117] "RemoveContainer" containerID="a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7" Jan 27 07:49:42 crc kubenswrapper[4764]: E0127 07:49:42.275751 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7\": container with ID starting with a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7 not found: ID does not exist" containerID="a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.275784 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7"} err="failed to get container status \"a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7\": rpc error: code = NotFound desc = could not find container \"a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7\": container with ID starting with a42c317293ae7c73127022173a7c60e1c0067fb635e4be821518cb07d98981d7 not found: ID does not exist" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.275809 4764 scope.go:117] "RemoveContainer" containerID="4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63" Jan 27 07:49:42 crc kubenswrapper[4764]: E0127 
07:49:42.276226 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63\": container with ID starting with 4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63 not found: ID does not exist" containerID="4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.276269 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63"} err="failed to get container status \"4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63\": rpc error: code = NotFound desc = could not find container \"4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63\": container with ID starting with 4c83c2a4062ae2f9f3e12ddadeb4d812a635b9a4244f594ea3b1b097fb8edb63 not found: ID does not exist" Jan 27 07:49:42 crc kubenswrapper[4764]: I0127 07:49:42.467199 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" path="/var/lib/kubelet/pods/5f56c117-67d5-4f1b-a7da-d24685913f51/volumes" Jan 27 07:50:04 crc kubenswrapper[4764]: I0127 07:50:04.247055 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-db5d878f-pf9rf" podUID="69437692-e8cb-4991-a2de-1434f68c7201" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 07:50:16 crc kubenswrapper[4764]: I0127 07:50:16.500131 4764 generic.go:334] "Generic (PLEG): container finished" podID="c26586b8-9b42-42de-9b7d-4b8081ee2a67" containerID="3c6c2203835799abb1ed8db24ee9d5fc2a4dd8cc80e7048481adce721b80bfac" exitCode=0 Jan 27 07:50:16 crc kubenswrapper[4764]: I0127 07:50:16.500220 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" event={"ID":"c26586b8-9b42-42de-9b7d-4b8081ee2a67","Type":"ContainerDied","Data":"3c6c2203835799abb1ed8db24ee9d5fc2a4dd8cc80e7048481adce721b80bfac"} Jan 27 07:50:17 crc kubenswrapper[4764]: I0127 07:50:17.974059 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.082219 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ssh-key-openstack-edpm-ipam\") pod \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.082499 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovn-combined-ca-bundle\") pod \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.082540 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-inventory\") pod \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.082574 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75c9w\" (UniqueName: \"kubernetes.io/projected/c26586b8-9b42-42de-9b7d-4b8081ee2a67-kube-api-access-75c9w\") pod \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.082610 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovncontroller-config-0\") pod \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\" (UID: \"c26586b8-9b42-42de-9b7d-4b8081ee2a67\") " Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.087601 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26586b8-9b42-42de-9b7d-4b8081ee2a67-kube-api-access-75c9w" (OuterVolumeSpecName: "kube-api-access-75c9w") pod "c26586b8-9b42-42de-9b7d-4b8081ee2a67" (UID: "c26586b8-9b42-42de-9b7d-4b8081ee2a67"). InnerVolumeSpecName "kube-api-access-75c9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.089018 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c26586b8-9b42-42de-9b7d-4b8081ee2a67" (UID: "c26586b8-9b42-42de-9b7d-4b8081ee2a67"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.110492 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c26586b8-9b42-42de-9b7d-4b8081ee2a67" (UID: "c26586b8-9b42-42de-9b7d-4b8081ee2a67"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.114920 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c26586b8-9b42-42de-9b7d-4b8081ee2a67" (UID: "c26586b8-9b42-42de-9b7d-4b8081ee2a67"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.124205 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-inventory" (OuterVolumeSpecName: "inventory") pod "c26586b8-9b42-42de-9b7d-4b8081ee2a67" (UID: "c26586b8-9b42-42de-9b7d-4b8081ee2a67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.185800 4764 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.185845 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.185859 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75c9w\" (UniqueName: \"kubernetes.io/projected/c26586b8-9b42-42de-9b7d-4b8081ee2a67-kube-api-access-75c9w\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.185874 4764 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.185888 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c26586b8-9b42-42de-9b7d-4b8081ee2a67-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.521811 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" event={"ID":"c26586b8-9b42-42de-9b7d-4b8081ee2a67","Type":"ContainerDied","Data":"c8d4fdb7291fbf36212744539a0f2251d082f744f179f3432e09429ee11b6d66"} Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.521857 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8d4fdb7291fbf36212744539a0f2251d082f744f179f3432e09429ee11b6d66" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.521899 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ndkg7" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.612279 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k"] Jan 27 07:50:18 crc kubenswrapper[4764]: E0127 07:50:18.612677 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="extract-content" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.612698 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="extract-content" Jan 27 07:50:18 crc kubenswrapper[4764]: E0127 07:50:18.612714 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="registry-server" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.612720 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="registry-server" Jan 27 07:50:18 crc kubenswrapper[4764]: E0127 07:50:18.612735 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="extract-utilities" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.612742 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="extract-utilities" Jan 27 07:50:18 crc kubenswrapper[4764]: E0127 07:50:18.612751 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26586b8-9b42-42de-9b7d-4b8081ee2a67" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.612757 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26586b8-9b42-42de-9b7d-4b8081ee2a67" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.612937 4764 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c26586b8-9b42-42de-9b7d-4b8081ee2a67" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.612949 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f56c117-67d5-4f1b-a7da-d24685913f51" containerName="registry-server" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.613543 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.616782 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.616873 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.616929 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.616995 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.617017 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.621318 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.626322 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k"] Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.696727 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.696800 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.696854 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.696883 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.697087 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7g9mz\" (UniqueName: \"kubernetes.io/projected/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-kube-api-access-7g9mz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.697301 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.798993 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.799059 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g9mz\" (UniqueName: \"kubernetes.io/projected/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-kube-api-access-7g9mz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.799135 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.799185 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.799243 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.799307 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.805958 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.811002 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.814721 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.815029 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.815851 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.818232 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g9mz\" (UniqueName: \"kubernetes.io/projected/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-kube-api-access-7g9mz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:18 crc kubenswrapper[4764]: I0127 07:50:18.936048 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:50:19 crc kubenswrapper[4764]: I0127 07:50:19.566618 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k"] Jan 27 07:50:20 crc kubenswrapper[4764]: I0127 07:50:20.550145 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" event={"ID":"c5cba19d-4bf2-4dba-b8e3-43594cec3acb","Type":"ContainerStarted","Data":"feab82a6e384d8995e8fa47802cac08d2e64ed606a579368fdfe9c3fb185e7e0"} Jan 27 07:50:20 crc kubenswrapper[4764]: I0127 07:50:20.550688 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" event={"ID":"c5cba19d-4bf2-4dba-b8e3-43594cec3acb","Type":"ContainerStarted","Data":"c16ee0760107b2f5497bf65ece76cd916fa8db167af4bcbe16d50eb625fc653d"} Jan 27 07:50:20 crc kubenswrapper[4764]: I0127 07:50:20.577613 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" podStartSLOduration=2.117226031 podStartE2EDuration="2.577598191s" podCreationTimestamp="2026-01-27 07:50:18 +0000 UTC" firstStartedPulling="2026-01-27 
07:50:19.573546255 +0000 UTC m=+2032.169168781" lastFinishedPulling="2026-01-27 07:50:20.033918415 +0000 UTC m=+2032.629540941" observedRunningTime="2026-01-27 07:50:20.570028047 +0000 UTC m=+2033.165650583" watchObservedRunningTime="2026-01-27 07:50:20.577598191 +0000 UTC m=+2033.173220717" Jan 27 07:51:12 crc kubenswrapper[4764]: I0127 07:51:12.002500 4764 generic.go:334] "Generic (PLEG): container finished" podID="c5cba19d-4bf2-4dba-b8e3-43594cec3acb" containerID="feab82a6e384d8995e8fa47802cac08d2e64ed606a579368fdfe9c3fb185e7e0" exitCode=0 Jan 27 07:51:12 crc kubenswrapper[4764]: I0127 07:51:12.002575 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" event={"ID":"c5cba19d-4bf2-4dba-b8e3-43594cec3acb","Type":"ContainerDied","Data":"feab82a6e384d8995e8fa47802cac08d2e64ed606a579368fdfe9c3fb185e7e0"} Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.401681 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.540172 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-inventory\") pod \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.540265 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-ssh-key-openstack-edpm-ipam\") pod \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.540360 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-nova-metadata-neutron-config-0\") pod \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.540391 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.540421 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-metadata-combined-ca-bundle\") pod \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\" (UID: 
\"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.540532 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g9mz\" (UniqueName: \"kubernetes.io/projected/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-kube-api-access-7g9mz\") pod \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\" (UID: \"c5cba19d-4bf2-4dba-b8e3-43594cec3acb\") " Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.547347 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-kube-api-access-7g9mz" (OuterVolumeSpecName: "kube-api-access-7g9mz") pod "c5cba19d-4bf2-4dba-b8e3-43594cec3acb" (UID: "c5cba19d-4bf2-4dba-b8e3-43594cec3acb"). InnerVolumeSpecName "kube-api-access-7g9mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.547475 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c5cba19d-4bf2-4dba-b8e3-43594cec3acb" (UID: "c5cba19d-4bf2-4dba-b8e3-43594cec3acb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.568388 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c5cba19d-4bf2-4dba-b8e3-43594cec3acb" (UID: "c5cba19d-4bf2-4dba-b8e3-43594cec3acb"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.569280 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-inventory" (OuterVolumeSpecName: "inventory") pod "c5cba19d-4bf2-4dba-b8e3-43594cec3acb" (UID: "c5cba19d-4bf2-4dba-b8e3-43594cec3acb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.573694 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c5cba19d-4bf2-4dba-b8e3-43594cec3acb" (UID: "c5cba19d-4bf2-4dba-b8e3-43594cec3acb"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.578571 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c5cba19d-4bf2-4dba-b8e3-43594cec3acb" (UID: "c5cba19d-4bf2-4dba-b8e3-43594cec3acb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.643183 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g9mz\" (UniqueName: \"kubernetes.io/projected/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-kube-api-access-7g9mz\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.643227 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.643236 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.643245 4764 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.643257 4764 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:13 crc kubenswrapper[4764]: I0127 07:51:13.643269 4764 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5cba19d-4bf2-4dba-b8e3-43594cec3acb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.029850 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" event={"ID":"c5cba19d-4bf2-4dba-b8e3-43594cec3acb","Type":"ContainerDied","Data":"c16ee0760107b2f5497bf65ece76cd916fa8db167af4bcbe16d50eb625fc653d"} Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.029906 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16ee0760107b2f5497bf65ece76cd916fa8db167af4bcbe16d50eb625fc653d" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.029937 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.119122 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l"] Jan 27 07:51:14 crc kubenswrapper[4764]: E0127 07:51:14.119962 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5cba19d-4bf2-4dba-b8e3-43594cec3acb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.119989 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5cba19d-4bf2-4dba-b8e3-43594cec3acb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.120227 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5cba19d-4bf2-4dba-b8e3-43594cec3acb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.121004 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.125477 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.125564 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.125501 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.125769 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.125895 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.131872 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l"] Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.257379 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s98sk\" (UniqueName: \"kubernetes.io/projected/0bb121c0-ea4c-49b6-b818-e36e6b657d53-kube-api-access-s98sk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.257495 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: 
\"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.257631 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.257875 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.257961 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.359867 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s98sk\" (UniqueName: \"kubernetes.io/projected/0bb121c0-ea4c-49b6-b818-e36e6b657d53-kube-api-access-s98sk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.359989 
4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.360042 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.360108 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.360144 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.363796 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.363956 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.365172 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.370996 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.383252 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s98sk\" (UniqueName: \"kubernetes.io/projected/0bb121c0-ea4c-49b6-b818-e36e6b657d53-kube-api-access-s98sk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4js7l\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.438360 4764 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" Jan 27 07:51:14 crc kubenswrapper[4764]: I0127 07:51:14.961978 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l"] Jan 27 07:51:15 crc kubenswrapper[4764]: I0127 07:51:15.040781 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" event={"ID":"0bb121c0-ea4c-49b6-b818-e36e6b657d53","Type":"ContainerStarted","Data":"c0cc1d3770fb7d6e2542d74b44be5a48e7a4212a29ac15bc64bfd8e664dcf83e"} Jan 27 07:51:21 crc kubenswrapper[4764]: I0127 07:51:21.778183 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ce77f72a-8ed8-4216-b443-a1c5737a50e7" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 07:51:24 crc kubenswrapper[4764]: I0127 07:51:24.124083 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" event={"ID":"0bb121c0-ea4c-49b6-b818-e36e6b657d53","Type":"ContainerStarted","Data":"3e4a3d22e14fde4ae17d15746f0684d708ffe5b53d1c60914dc6949c8e5b503d"} Jan 27 07:51:24 crc kubenswrapper[4764]: I0127 07:51:24.171123 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" podStartSLOduration=1.5385741290000001 podStartE2EDuration="10.171098698s" podCreationTimestamp="2026-01-27 07:51:14 +0000 UTC" firstStartedPulling="2026-01-27 07:51:14.970185029 +0000 UTC m=+2087.565807545" lastFinishedPulling="2026-01-27 07:51:23.602709568 +0000 UTC m=+2096.198332114" observedRunningTime="2026-01-27 07:51:24.159909187 +0000 UTC m=+2096.755531713" watchObservedRunningTime="2026-01-27 07:51:24.171098698 +0000 UTC m=+2096.766721234" Jan 27 07:51:53 crc kubenswrapper[4764]: I0127 07:51:53.762791 4764 patch_prober.go:28] interesting 
pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:51:53 crc kubenswrapper[4764]: I0127 07:51:53.763351 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:52:17 crc kubenswrapper[4764]: I0127 07:52:17.774543 4764 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:52:17 crc kubenswrapper[4764]: I0127 07:52:17.775810 4764 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:52:23 crc kubenswrapper[4764]: I0127 07:52:23.762776 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:52:23 crc kubenswrapper[4764]: I0127 07:52:23.763213 4764 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.762434 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.763100 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.763158 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.764070 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b92b7516385baca0d4396dbba49c5900e0bc01a6b969ff7efacd572bb1ac811"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.764152 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" 
containerID="cri-o://1b92b7516385baca0d4396dbba49c5900e0bc01a6b969ff7efacd572bb1ac811" gracePeriod=600 Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.949741 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="1b92b7516385baca0d4396dbba49c5900e0bc01a6b969ff7efacd572bb1ac811" exitCode=0 Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.949820 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"1b92b7516385baca0d4396dbba49c5900e0bc01a6b969ff7efacd572bb1ac811"} Jan 27 07:52:53 crc kubenswrapper[4764]: I0127 07:52:53.950080 4764 scope.go:117] "RemoveContainer" containerID="8a31cc193d593cbe60cafeb087bb77c172bab961c02518a0f98810f80bc37bc6" Jan 27 07:52:54 crc kubenswrapper[4764]: I0127 07:52:54.962859 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a"} Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.333357 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nsqrf"] Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.337170 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.345863 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsqrf"] Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.416734 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sn8n\" (UniqueName: \"kubernetes.io/projected/d72e70c0-99b2-4446-95f1-302e19669e67-kube-api-access-9sn8n\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.416946 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-catalog-content\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.417101 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-utilities\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.519338 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-utilities\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.519583 4764 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9sn8n\" (UniqueName: \"kubernetes.io/projected/d72e70c0-99b2-4446-95f1-302e19669e67-kube-api-access-9sn8n\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.519665 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-catalog-content\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.520059 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-utilities\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.520125 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-catalog-content\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.551105 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sn8n\" (UniqueName: \"kubernetes.io/projected/d72e70c0-99b2-4446-95f1-302e19669e67-kube-api-access-9sn8n\") pod \"redhat-marketplace-nsqrf\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:30 crc kubenswrapper[4764]: I0127 07:53:30.663095 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:31 crc kubenswrapper[4764]: I0127 07:53:31.214874 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsqrf"] Jan 27 07:53:31 crc kubenswrapper[4764]: I0127 07:53:31.301286 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsqrf" event={"ID":"d72e70c0-99b2-4446-95f1-302e19669e67","Type":"ContainerStarted","Data":"affec515d14d4d270c1c2bf4f90073b32d2e54651bfe0d35787f01f84ad04444"} Jan 27 07:53:32 crc kubenswrapper[4764]: I0127 07:53:32.311863 4764 generic.go:334] "Generic (PLEG): container finished" podID="d72e70c0-99b2-4446-95f1-302e19669e67" containerID="6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899" exitCode=0 Jan 27 07:53:32 crc kubenswrapper[4764]: I0127 07:53:32.311951 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsqrf" event={"ID":"d72e70c0-99b2-4446-95f1-302e19669e67","Type":"ContainerDied","Data":"6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899"} Jan 27 07:53:32 crc kubenswrapper[4764]: I0127 07:53:32.314862 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:53:33 crc kubenswrapper[4764]: I0127 07:53:33.330853 4764 generic.go:334] "Generic (PLEG): container finished" podID="d72e70c0-99b2-4446-95f1-302e19669e67" containerID="df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a" exitCode=0 Jan 27 07:53:33 crc kubenswrapper[4764]: I0127 07:53:33.330913 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsqrf" event={"ID":"d72e70c0-99b2-4446-95f1-302e19669e67","Type":"ContainerDied","Data":"df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a"} Jan 27 07:53:34 crc kubenswrapper[4764]: I0127 07:53:34.343020 4764 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-nsqrf" event={"ID":"d72e70c0-99b2-4446-95f1-302e19669e67","Type":"ContainerStarted","Data":"b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d"} Jan 27 07:53:34 crc kubenswrapper[4764]: I0127 07:53:34.361572 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nsqrf" podStartSLOduration=2.908822617 podStartE2EDuration="4.361552193s" podCreationTimestamp="2026-01-27 07:53:30 +0000 UTC" firstStartedPulling="2026-01-27 07:53:32.314506978 +0000 UTC m=+2224.910129514" lastFinishedPulling="2026-01-27 07:53:33.767236564 +0000 UTC m=+2226.362859090" observedRunningTime="2026-01-27 07:53:34.35883898 +0000 UTC m=+2226.954461516" watchObservedRunningTime="2026-01-27 07:53:34.361552193 +0000 UTC m=+2226.957174719" Jan 27 07:53:40 crc kubenswrapper[4764]: I0127 07:53:40.664738 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:40 crc kubenswrapper[4764]: I0127 07:53:40.665026 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:40 crc kubenswrapper[4764]: I0127 07:53:40.735431 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:41 crc kubenswrapper[4764]: I0127 07:53:41.449850 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:41 crc kubenswrapper[4764]: I0127 07:53:41.507996 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsqrf"] Jan 27 07:53:43 crc kubenswrapper[4764]: I0127 07:53:43.419890 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nsqrf" 
podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="registry-server" containerID="cri-o://b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d" gracePeriod=2 Jan 27 07:53:43 crc kubenswrapper[4764]: I0127 07:53:43.930270 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.104684 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-utilities\") pod \"d72e70c0-99b2-4446-95f1-302e19669e67\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.104799 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sn8n\" (UniqueName: \"kubernetes.io/projected/d72e70c0-99b2-4446-95f1-302e19669e67-kube-api-access-9sn8n\") pod \"d72e70c0-99b2-4446-95f1-302e19669e67\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.104865 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-catalog-content\") pod \"d72e70c0-99b2-4446-95f1-302e19669e67\" (UID: \"d72e70c0-99b2-4446-95f1-302e19669e67\") " Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.113832 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-utilities" (OuterVolumeSpecName: "utilities") pod "d72e70c0-99b2-4446-95f1-302e19669e67" (UID: "d72e70c0-99b2-4446-95f1-302e19669e67"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.120795 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72e70c0-99b2-4446-95f1-302e19669e67-kube-api-access-9sn8n" (OuterVolumeSpecName: "kube-api-access-9sn8n") pod "d72e70c0-99b2-4446-95f1-302e19669e67" (UID: "d72e70c0-99b2-4446-95f1-302e19669e67"). InnerVolumeSpecName "kube-api-access-9sn8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.130123 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d72e70c0-99b2-4446-95f1-302e19669e67" (UID: "d72e70c0-99b2-4446-95f1-302e19669e67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.207902 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.207960 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sn8n\" (UniqueName: \"kubernetes.io/projected/d72e70c0-99b2-4446-95f1-302e19669e67-kube-api-access-9sn8n\") on node \"crc\" DevicePath \"\"" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.207976 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72e70c0-99b2-4446-95f1-302e19669e67-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.430859 4764 generic.go:334] "Generic (PLEG): container finished" podID="d72e70c0-99b2-4446-95f1-302e19669e67" 
containerID="b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d" exitCode=0 Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.430906 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsqrf" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.430934 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsqrf" event={"ID":"d72e70c0-99b2-4446-95f1-302e19669e67","Type":"ContainerDied","Data":"b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d"} Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.431830 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsqrf" event={"ID":"d72e70c0-99b2-4446-95f1-302e19669e67","Type":"ContainerDied","Data":"affec515d14d4d270c1c2bf4f90073b32d2e54651bfe0d35787f01f84ad04444"} Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.431852 4764 scope.go:117] "RemoveContainer" containerID="b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.465626 4764 scope.go:117] "RemoveContainer" containerID="df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.480036 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsqrf"] Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.492328 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsqrf"] Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.492330 4764 scope.go:117] "RemoveContainer" containerID="6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.534723 4764 scope.go:117] "RemoveContainer" containerID="b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d" Jan 27 
07:53:44 crc kubenswrapper[4764]: E0127 07:53:44.535209 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d\": container with ID starting with b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d not found: ID does not exist" containerID="b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.535248 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d"} err="failed to get container status \"b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d\": rpc error: code = NotFound desc = could not find container \"b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d\": container with ID starting with b62c459a6c46b4cbab9dcecd69de1a83f7283128ed4c58c3c825959da836af5d not found: ID does not exist" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.535273 4764 scope.go:117] "RemoveContainer" containerID="df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a" Jan 27 07:53:44 crc kubenswrapper[4764]: E0127 07:53:44.536147 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a\": container with ID starting with df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a not found: ID does not exist" containerID="df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.536234 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a"} err="failed to get container status 
\"df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a\": rpc error: code = NotFound desc = could not find container \"df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a\": container with ID starting with df282cfd5ea0ca78b26358192b04884c6279ce924849d89e1702c45d3f76fe7a not found: ID does not exist" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.536265 4764 scope.go:117] "RemoveContainer" containerID="6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899" Jan 27 07:53:44 crc kubenswrapper[4764]: E0127 07:53:44.536574 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899\": container with ID starting with 6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899 not found: ID does not exist" containerID="6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899" Jan 27 07:53:44 crc kubenswrapper[4764]: I0127 07:53:44.536607 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899"} err="failed to get container status \"6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899\": rpc error: code = NotFound desc = could not find container \"6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899\": container with ID starting with 6d6ea4c90d3ba183ee7350470dc7a95b2d98550e52dd95734a625018ab01f899 not found: ID does not exist" Jan 27 07:53:46 crc kubenswrapper[4764]: I0127 07:53:46.457375 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" path="/var/lib/kubelet/pods/d72e70c0-99b2-4446-95f1-302e19669e67/volumes" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.102069 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lldqw"] Jan 27 07:53:48 
crc kubenswrapper[4764]: E0127 07:53:48.102917 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="registry-server" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.102935 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="registry-server" Jan 27 07:53:48 crc kubenswrapper[4764]: E0127 07:53:48.102948 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="extract-utilities" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.102956 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="extract-utilities" Jan 27 07:53:48 crc kubenswrapper[4764]: E0127 07:53:48.102970 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="extract-content" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.102978 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="extract-content" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.103211 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72e70c0-99b2-4446-95f1-302e19669e67" containerName="registry-server" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.104891 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.123198 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lldqw"] Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.289787 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-utilities\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.289922 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-catalog-content\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.290113 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rljp\" (UniqueName: \"kubernetes.io/projected/0ba12fb0-50c2-4db4-abee-1da552abc33b-kube-api-access-5rljp\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.391423 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rljp\" (UniqueName: \"kubernetes.io/projected/0ba12fb0-50c2-4db4-abee-1da552abc33b-kube-api-access-5rljp\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.391514 4764 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-utilities\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.391565 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-catalog-content\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.392241 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-utilities\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.392399 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-catalog-content\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.409410 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rljp\" (UniqueName: \"kubernetes.io/projected/0ba12fb0-50c2-4db4-abee-1da552abc33b-kube-api-access-5rljp\") pod \"community-operators-lldqw\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") " pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.431608 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:48 crc kubenswrapper[4764]: I0127 07:53:48.979714 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lldqw"] Jan 27 07:53:49 crc kubenswrapper[4764]: I0127 07:53:49.491891 4764 generic.go:334] "Generic (PLEG): container finished" podID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerID="6ea9f1dfcac51afb0dce6a5d4a0cf22923883a941e04554964529880c459fe5c" exitCode=0 Jan 27 07:53:49 crc kubenswrapper[4764]: I0127 07:53:49.492005 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lldqw" event={"ID":"0ba12fb0-50c2-4db4-abee-1da552abc33b","Type":"ContainerDied","Data":"6ea9f1dfcac51afb0dce6a5d4a0cf22923883a941e04554964529880c459fe5c"} Jan 27 07:53:49 crc kubenswrapper[4764]: I0127 07:53:49.492246 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lldqw" event={"ID":"0ba12fb0-50c2-4db4-abee-1da552abc33b","Type":"ContainerStarted","Data":"418eb5d94c21264e69ae2d6ea543ca77f16e4d6dae9fbfeb24cb78055cfab380"} Jan 27 07:53:50 crc kubenswrapper[4764]: I0127 07:53:50.500706 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lldqw" event={"ID":"0ba12fb0-50c2-4db4-abee-1da552abc33b","Type":"ContainerStarted","Data":"26271b992b5667aaa25834e13b31241ab57c838b0e320a71c61ff92bd7001691"} Jan 27 07:53:51 crc kubenswrapper[4764]: I0127 07:53:51.511780 4764 generic.go:334] "Generic (PLEG): container finished" podID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerID="26271b992b5667aaa25834e13b31241ab57c838b0e320a71c61ff92bd7001691" exitCode=0 Jan 27 07:53:51 crc kubenswrapper[4764]: I0127 07:53:51.511864 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lldqw" 
event={"ID":"0ba12fb0-50c2-4db4-abee-1da552abc33b","Type":"ContainerDied","Data":"26271b992b5667aaa25834e13b31241ab57c838b0e320a71c61ff92bd7001691"} Jan 27 07:53:52 crc kubenswrapper[4764]: I0127 07:53:52.522473 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lldqw" event={"ID":"0ba12fb0-50c2-4db4-abee-1da552abc33b","Type":"ContainerStarted","Data":"3cf384038ad7c0b109f9db6a11f0c97763f7f9ac88d729dff9fa12089e3823cb"} Jan 27 07:53:52 crc kubenswrapper[4764]: I0127 07:53:52.549215 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lldqw" podStartSLOduration=2.148491422 podStartE2EDuration="4.54919221s" podCreationTimestamp="2026-01-27 07:53:48 +0000 UTC" firstStartedPulling="2026-01-27 07:53:49.494167125 +0000 UTC m=+2242.089789651" lastFinishedPulling="2026-01-27 07:53:51.894867893 +0000 UTC m=+2244.490490439" observedRunningTime="2026-01-27 07:53:52.540872835 +0000 UTC m=+2245.136495381" watchObservedRunningTime="2026-01-27 07:53:52.54919221 +0000 UTC m=+2245.144814736" Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.312626 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-778bl"] Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.316548 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-778bl" Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.324752 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-778bl"] Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.432239 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.432565 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.486379 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lldqw" Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.501589 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-catalog-content\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl" Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.501641 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-utilities\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl" Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.501674 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zmtv\" (UniqueName: \"kubernetes.io/projected/c471428d-6923-49a0-aaa5-ec249fc4dad3-kube-api-access-8zmtv\") pod \"certified-operators-778bl\" (UID: 
\"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.603126 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zmtv\" (UniqueName: \"kubernetes.io/projected/c471428d-6923-49a0-aaa5-ec249fc4dad3-kube-api-access-8zmtv\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.603429 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-catalog-content\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.603496 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-utilities\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.603942 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-catalog-content\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.603962 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-utilities\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.633914 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zmtv\" (UniqueName: \"kubernetes.io/projected/c471428d-6923-49a0-aaa5-ec249fc4dad3-kube-api-access-8zmtv\") pod \"certified-operators-778bl\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") " pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.658294 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lldqw"
Jan 27 07:53:58 crc kubenswrapper[4764]: I0127 07:53:58.669428 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:53:59 crc kubenswrapper[4764]: I0127 07:53:59.230862 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-778bl"]
Jan 27 07:53:59 crc kubenswrapper[4764]: I0127 07:53:59.593227 4764 generic.go:334] "Generic (PLEG): container finished" podID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerID="032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd" exitCode=0
Jan 27 07:53:59 crc kubenswrapper[4764]: I0127 07:53:59.593272 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-778bl" event={"ID":"c471428d-6923-49a0-aaa5-ec249fc4dad3","Type":"ContainerDied","Data":"032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd"}
Jan 27 07:53:59 crc kubenswrapper[4764]: I0127 07:53:59.593663 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-778bl" event={"ID":"c471428d-6923-49a0-aaa5-ec249fc4dad3","Type":"ContainerStarted","Data":"644d220f53f23441508e35988ef5004fac5c93a352fda573f2519595a07bccb3"}
Jan 27 07:54:00 crc kubenswrapper[4764]: I0127 07:54:00.604937 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-778bl" event={"ID":"c471428d-6923-49a0-aaa5-ec249fc4dad3","Type":"ContainerStarted","Data":"c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a"}
Jan 27 07:54:00 crc kubenswrapper[4764]: I0127 07:54:00.887920 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lldqw"]
Jan 27 07:54:01 crc kubenswrapper[4764]: I0127 07:54:01.619200 4764 generic.go:334] "Generic (PLEG): container finished" podID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerID="c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a" exitCode=0
Jan 27 07:54:01 crc kubenswrapper[4764]: I0127 07:54:01.619273 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-778bl" event={"ID":"c471428d-6923-49a0-aaa5-ec249fc4dad3","Type":"ContainerDied","Data":"c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a"}
Jan 27 07:54:01 crc kubenswrapper[4764]: I0127 07:54:01.619413 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lldqw" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="registry-server" containerID="cri-o://3cf384038ad7c0b109f9db6a11f0c97763f7f9ac88d729dff9fa12089e3823cb" gracePeriod=2
Jan 27 07:54:02 crc kubenswrapper[4764]: I0127 07:54:02.631287 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-778bl" event={"ID":"c471428d-6923-49a0-aaa5-ec249fc4dad3","Type":"ContainerStarted","Data":"2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542"}
Jan 27 07:54:03 crc kubenswrapper[4764]: I0127 07:54:03.661359 4764 generic.go:334] "Generic (PLEG): container finished" podID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerID="3cf384038ad7c0b109f9db6a11f0c97763f7f9ac88d729dff9fa12089e3823cb" exitCode=0
Jan 27 07:54:03 crc kubenswrapper[4764]: I0127 07:54:03.661480 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lldqw" event={"ID":"0ba12fb0-50c2-4db4-abee-1da552abc33b","Type":"ContainerDied","Data":"3cf384038ad7c0b109f9db6a11f0c97763f7f9ac88d729dff9fa12089e3823cb"}
Jan 27 07:54:03 crc kubenswrapper[4764]: I0127 07:54:03.853822 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lldqw"
Jan 27 07:54:03 crc kubenswrapper[4764]: I0127 07:54:03.871399 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-778bl" podStartSLOduration=3.451909043 podStartE2EDuration="5.871381337s" podCreationTimestamp="2026-01-27 07:53:58 +0000 UTC" firstStartedPulling="2026-01-27 07:53:59.5956861 +0000 UTC m=+2252.191308626" lastFinishedPulling="2026-01-27 07:54:02.015158394 +0000 UTC m=+2254.610780920" observedRunningTime="2026-01-27 07:54:03.689860744 +0000 UTC m=+2256.285483270" watchObservedRunningTime="2026-01-27 07:54:03.871381337 +0000 UTC m=+2256.467003863"
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.033084 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-utilities\") pod \"0ba12fb0-50c2-4db4-abee-1da552abc33b\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") "
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.033154 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rljp\" (UniqueName: \"kubernetes.io/projected/0ba12fb0-50c2-4db4-abee-1da552abc33b-kube-api-access-5rljp\") pod \"0ba12fb0-50c2-4db4-abee-1da552abc33b\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") "
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.033415 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-catalog-content\") pod \"0ba12fb0-50c2-4db4-abee-1da552abc33b\" (UID: \"0ba12fb0-50c2-4db4-abee-1da552abc33b\") "
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.033993 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-utilities" (OuterVolumeSpecName: "utilities") pod "0ba12fb0-50c2-4db4-abee-1da552abc33b" (UID: "0ba12fb0-50c2-4db4-abee-1da552abc33b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.034168 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.038556 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba12fb0-50c2-4db4-abee-1da552abc33b-kube-api-access-5rljp" (OuterVolumeSpecName: "kube-api-access-5rljp") pod "0ba12fb0-50c2-4db4-abee-1da552abc33b" (UID: "0ba12fb0-50c2-4db4-abee-1da552abc33b"). InnerVolumeSpecName "kube-api-access-5rljp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.086205 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ba12fb0-50c2-4db4-abee-1da552abc33b" (UID: "0ba12fb0-50c2-4db4-abee-1da552abc33b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.135612 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rljp\" (UniqueName: \"kubernetes.io/projected/0ba12fb0-50c2-4db4-abee-1da552abc33b-kube-api-access-5rljp\") on node \"crc\" DevicePath \"\""
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.135655 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba12fb0-50c2-4db4-abee-1da552abc33b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.671483 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lldqw" event={"ID":"0ba12fb0-50c2-4db4-abee-1da552abc33b","Type":"ContainerDied","Data":"418eb5d94c21264e69ae2d6ea543ca77f16e4d6dae9fbfeb24cb78055cfab380"}
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.671519 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lldqw"
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.671840 4764 scope.go:117] "RemoveContainer" containerID="3cf384038ad7c0b109f9db6a11f0c97763f7f9ac88d729dff9fa12089e3823cb"
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.695518 4764 scope.go:117] "RemoveContainer" containerID="26271b992b5667aaa25834e13b31241ab57c838b0e320a71c61ff92bd7001691"
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.697249 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lldqw"]
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.705933 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lldqw"]
Jan 27 07:54:04 crc kubenswrapper[4764]: I0127 07:54:04.713050 4764 scope.go:117] "RemoveContainer" containerID="6ea9f1dfcac51afb0dce6a5d4a0cf22923883a941e04554964529880c459fe5c"
Jan 27 07:54:06 crc kubenswrapper[4764]: I0127 07:54:06.451500 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" path="/var/lib/kubelet/pods/0ba12fb0-50c2-4db4-abee-1da552abc33b/volumes"
Jan 27 07:54:08 crc kubenswrapper[4764]: I0127 07:54:08.669925 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:54:08 crc kubenswrapper[4764]: I0127 07:54:08.670337 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:54:08 crc kubenswrapper[4764]: I0127 07:54:08.723052 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:54:08 crc kubenswrapper[4764]: I0127 07:54:08.779142 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:54:12 crc kubenswrapper[4764]: I0127 07:54:12.679184 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-778bl"]
Jan 27 07:54:12 crc kubenswrapper[4764]: I0127 07:54:12.680270 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-778bl" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="registry-server" containerID="cri-o://2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542" gracePeriod=2
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.121671 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.312973 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zmtv\" (UniqueName: \"kubernetes.io/projected/c471428d-6923-49a0-aaa5-ec249fc4dad3-kube-api-access-8zmtv\") pod \"c471428d-6923-49a0-aaa5-ec249fc4dad3\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") "
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.313206 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-catalog-content\") pod \"c471428d-6923-49a0-aaa5-ec249fc4dad3\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") "
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.313279 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-utilities\") pod \"c471428d-6923-49a0-aaa5-ec249fc4dad3\" (UID: \"c471428d-6923-49a0-aaa5-ec249fc4dad3\") "
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.314047 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-utilities" (OuterVolumeSpecName: "utilities") pod "c471428d-6923-49a0-aaa5-ec249fc4dad3" (UID: "c471428d-6923-49a0-aaa5-ec249fc4dad3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.318241 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c471428d-6923-49a0-aaa5-ec249fc4dad3-kube-api-access-8zmtv" (OuterVolumeSpecName: "kube-api-access-8zmtv") pod "c471428d-6923-49a0-aaa5-ec249fc4dad3" (UID: "c471428d-6923-49a0-aaa5-ec249fc4dad3"). InnerVolumeSpecName "kube-api-access-8zmtv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.368024 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c471428d-6923-49a0-aaa5-ec249fc4dad3" (UID: "c471428d-6923-49a0-aaa5-ec249fc4dad3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.415324 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.415369 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c471428d-6923-49a0-aaa5-ec249fc4dad3-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.415382 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zmtv\" (UniqueName: \"kubernetes.io/projected/c471428d-6923-49a0-aaa5-ec249fc4dad3-kube-api-access-8zmtv\") on node \"crc\" DevicePath \"\""
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.755294 4764 generic.go:334] "Generic (PLEG): container finished" podID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerID="2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542" exitCode=0
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.755351 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-778bl" event={"ID":"c471428d-6923-49a0-aaa5-ec249fc4dad3","Type":"ContainerDied","Data":"2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542"}
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.755365 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-778bl"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.755411 4764 scope.go:117] "RemoveContainer" containerID="2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.755400 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-778bl" event={"ID":"c471428d-6923-49a0-aaa5-ec249fc4dad3","Type":"ContainerDied","Data":"644d220f53f23441508e35988ef5004fac5c93a352fda573f2519595a07bccb3"}
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.794837 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-778bl"]
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.799088 4764 scope.go:117] "RemoveContainer" containerID="c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.802261 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-778bl"]
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.837678 4764 scope.go:117] "RemoveContainer" containerID="032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.885458 4764 scope.go:117] "RemoveContainer" containerID="2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542"
Jan 27 07:54:13 crc kubenswrapper[4764]: E0127 07:54:13.886092 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542\": container with ID starting with 2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542 not found: ID does not exist" containerID="2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.886148 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542"} err="failed to get container status \"2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542\": rpc error: code = NotFound desc = could not find container \"2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542\": container with ID starting with 2a5d38dae8347903f140bf4f4f4946523c1ed188bc8008a61c7d9ba612447542 not found: ID does not exist"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.886182 4764 scope.go:117] "RemoveContainer" containerID="c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a"
Jan 27 07:54:13 crc kubenswrapper[4764]: E0127 07:54:13.886751 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a\": container with ID starting with c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a not found: ID does not exist" containerID="c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.886794 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a"} err="failed to get container status \"c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a\": rpc error: code = NotFound desc = could not find container \"c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a\": container with ID starting with c9dd17685e3d3e42a84f2530139b155de2f9a1293962c42751a31db1a0d43c2a not found: ID does not exist"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.886824 4764 scope.go:117] "RemoveContainer" containerID="032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd"
Jan 27 07:54:13 crc kubenswrapper[4764]: E0127 07:54:13.887223 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd\": container with ID starting with 032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd not found: ID does not exist" containerID="032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd"
Jan 27 07:54:13 crc kubenswrapper[4764]: I0127 07:54:13.887252 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd"} err="failed to get container status \"032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd\": rpc error: code = NotFound desc = could not find container \"032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd\": container with ID starting with 032b22f1ffec38477e034374e22cabcf821b25bd229a195400d69f3e265991dd not found: ID does not exist"
Jan 27 07:54:14 crc kubenswrapper[4764]: I0127 07:54:14.452697 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" path="/var/lib/kubelet/pods/c471428d-6923-49a0-aaa5-ec249fc4dad3/volumes"
Jan 27 07:55:23 crc kubenswrapper[4764]: I0127 07:55:23.762819 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 07:55:23 crc kubenswrapper[4764]: I0127 07:55:23.763358 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 07:55:41 crc kubenswrapper[4764]: I0127 07:55:41.586993 4764 generic.go:334] "Generic (PLEG): container finished" podID="0bb121c0-ea4c-49b6-b818-e36e6b657d53" containerID="3e4a3d22e14fde4ae17d15746f0684d708ffe5b53d1c60914dc6949c8e5b503d" exitCode=0
Jan 27 07:55:41 crc kubenswrapper[4764]: I0127 07:55:41.587684 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" event={"ID":"0bb121c0-ea4c-49b6-b818-e36e6b657d53","Type":"ContainerDied","Data":"3e4a3d22e14fde4ae17d15746f0684d708ffe5b53d1c60914dc6949c8e5b503d"}
Jan 27 07:55:42 crc kubenswrapper[4764]: I0127 07:55:42.957511 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.029141 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-secret-0\") pod \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") "
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.029200 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-combined-ca-bundle\") pod \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") "
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.029246 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-inventory\") pod \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") "
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.029273 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-ssh-key-openstack-edpm-ipam\") pod \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") "
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.029348 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s98sk\" (UniqueName: \"kubernetes.io/projected/0bb121c0-ea4c-49b6-b818-e36e6b657d53-kube-api-access-s98sk\") pod \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\" (UID: \"0bb121c0-ea4c-49b6-b818-e36e6b657d53\") "
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.052782 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0bb121c0-ea4c-49b6-b818-e36e6b657d53" (UID: "0bb121c0-ea4c-49b6-b818-e36e6b657d53"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.070741 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb121c0-ea4c-49b6-b818-e36e6b657d53-kube-api-access-s98sk" (OuterVolumeSpecName: "kube-api-access-s98sk") pod "0bb121c0-ea4c-49b6-b818-e36e6b657d53" (UID: "0bb121c0-ea4c-49b6-b818-e36e6b657d53"). InnerVolumeSpecName "kube-api-access-s98sk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.120692 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-inventory" (OuterVolumeSpecName: "inventory") pod "0bb121c0-ea4c-49b6-b818-e36e6b657d53" (UID: "0bb121c0-ea4c-49b6-b818-e36e6b657d53"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.135271 4764 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.135312 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.135323 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s98sk\" (UniqueName: \"kubernetes.io/projected/0bb121c0-ea4c-49b6-b818-e36e6b657d53-kube-api-access-s98sk\") on node \"crc\" DevicePath \"\""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.135922 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0bb121c0-ea4c-49b6-b818-e36e6b657d53" (UID: "0bb121c0-ea4c-49b6-b818-e36e6b657d53"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.138206 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0bb121c0-ea4c-49b6-b818-e36e6b657d53" (UID: "0bb121c0-ea4c-49b6-b818-e36e6b657d53"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.236995 4764 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.237033 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb121c0-ea4c-49b6-b818-e36e6b657d53-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.608028 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l" event={"ID":"0bb121c0-ea4c-49b6-b818-e36e6b657d53","Type":"ContainerDied","Data":"c0cc1d3770fb7d6e2542d74b44be5a48e7a4212a29ac15bc64bfd8e664dcf83e"}
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.608262 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0cc1d3770fb7d6e2542d74b44be5a48e7a4212a29ac15bc64bfd8e664dcf83e"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.608325 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4js7l"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.713723 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn"]
Jan 27 07:55:43 crc kubenswrapper[4764]: E0127 07:55:43.714535 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="registry-server"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.714553 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="registry-server"
Jan 27 07:55:43 crc kubenswrapper[4764]: E0127 07:55:43.714574 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="extract-content"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.714583 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="extract-content"
Jan 27 07:55:43 crc kubenswrapper[4764]: E0127 07:55:43.714609 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="extract-utilities"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.714664 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="extract-utilities"
Jan 27 07:55:43 crc kubenswrapper[4764]: E0127 07:55:43.714684 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb121c0-ea4c-49b6-b818-e36e6b657d53" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.714694 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb121c0-ea4c-49b6-b818-e36e6b657d53" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 27 07:55:43 crc kubenswrapper[4764]: E0127 07:55:43.714718 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="extract-utilities"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.714729 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="extract-utilities"
Jan 27 07:55:43 crc kubenswrapper[4764]: E0127 07:55:43.714747 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="extract-content"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.714757 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="extract-content"
Jan 27 07:55:43 crc kubenswrapper[4764]: E0127 07:55:43.714781 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="registry-server"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.714790 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="registry-server"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.715075 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb121c0-ea4c-49b6-b818-e36e6b657d53" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.715106 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="c471428d-6923-49a0-aaa5-ec249fc4dad3" containerName="registry-server"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.715132 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba12fb0-50c2-4db4-abee-1da552abc33b" containerName="registry-server"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.716071 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.719799 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.720045 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.720171 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.720459 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.720513 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.720522 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.720766 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.731905 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn"]
Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850503 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn"
Jan 27 07:55:43 crc kubenswrapper[4764]: 
I0127 07:55:43.850564 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850591 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850625 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850684 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850715 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl4p4\" (UniqueName: 
\"kubernetes.io/projected/348551f5-8fe7-4ca4-a294-ef2587ea3928-kube-api-access-zl4p4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850735 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850771 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.850793 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952304 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952381 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952459 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952542 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952590 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl4p4\" (UniqueName: \"kubernetes.io/projected/348551f5-8fe7-4ca4-a294-ef2587ea3928-kube-api-access-zl4p4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952627 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952686 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952725 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.952778 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.954658 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.957716 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.958124 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.959096 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.959144 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.960571 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.960845 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.960971 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:43 crc kubenswrapper[4764]: I0127 07:55:43.971380 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl4p4\" (UniqueName: \"kubernetes.io/projected/348551f5-8fe7-4ca4-a294-ef2587ea3928-kube-api-access-zl4p4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qfnxn\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:44 crc kubenswrapper[4764]: I0127 07:55:44.042647 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:55:44 crc kubenswrapper[4764]: I0127 07:55:44.585273 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn"] Jan 27 07:55:44 crc kubenswrapper[4764]: I0127 07:55:44.620595 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" event={"ID":"348551f5-8fe7-4ca4-a294-ef2587ea3928","Type":"ContainerStarted","Data":"531a029c61277cda5edc7891a4a200943ad46c6a61c71c00d534af33c4d5a5b8"} Jan 27 07:55:45 crc kubenswrapper[4764]: I0127 07:55:45.631006 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" event={"ID":"348551f5-8fe7-4ca4-a294-ef2587ea3928","Type":"ContainerStarted","Data":"8ed897d5c52d30f4ffc213bee96dea98a78684de1efbe2f08c2260213ab119dd"} Jan 27 07:55:45 crc kubenswrapper[4764]: I0127 07:55:45.660856 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" podStartSLOduration=2.062565728 podStartE2EDuration="2.660840493s" podCreationTimestamp="2026-01-27 07:55:43 +0000 UTC" firstStartedPulling="2026-01-27 07:55:44.59110108 +0000 UTC m=+2357.186723616" lastFinishedPulling="2026-01-27 07:55:45.189375855 +0000 UTC m=+2357.784998381" observedRunningTime="2026-01-27 07:55:45.65852229 +0000 UTC m=+2358.254144816" watchObservedRunningTime="2026-01-27 07:55:45.660840493 +0000 UTC m=+2358.256463019" Jan 27 07:55:53 crc kubenswrapper[4764]: I0127 07:55:53.762300 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:55:53 crc kubenswrapper[4764]: I0127 07:55:53.762991 4764 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.763004 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.763485 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.763537 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.764283 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.764355 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" 
containerName="machine-config-daemon" containerID="cri-o://a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" gracePeriod=600 Jan 27 07:56:23 crc kubenswrapper[4764]: E0127 07:56:23.959858 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.980799 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" exitCode=0 Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.980842 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a"} Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.980879 4764 scope.go:117] "RemoveContainer" containerID="1b92b7516385baca0d4396dbba49c5900e0bc01a6b969ff7efacd572bb1ac811" Jan 27 07:56:23 crc kubenswrapper[4764]: I0127 07:56:23.981551 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:56:23 crc kubenswrapper[4764]: E0127 07:56:23.981845 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:56:36 crc kubenswrapper[4764]: I0127 07:56:36.439407 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:56:36 crc kubenswrapper[4764]: E0127 07:56:36.440918 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:56:51 crc kubenswrapper[4764]: I0127 07:56:51.438362 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:56:51 crc kubenswrapper[4764]: E0127 07:56:51.439135 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:57:06 crc kubenswrapper[4764]: I0127 07:57:06.438544 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:57:06 crc kubenswrapper[4764]: E0127 07:57:06.439537 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:57:17 crc kubenswrapper[4764]: I0127 07:57:17.439123 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:57:17 crc kubenswrapper[4764]: E0127 07:57:17.440485 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:57:29 crc kubenswrapper[4764]: I0127 07:57:29.438683 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:57:29 crc kubenswrapper[4764]: E0127 07:57:29.439482 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:57:42 crc kubenswrapper[4764]: I0127 07:57:42.441215 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:57:42 crc kubenswrapper[4764]: E0127 07:57:42.442076 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:57:56 crc kubenswrapper[4764]: I0127 07:57:56.438777 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:57:56 crc kubenswrapper[4764]: E0127 07:57:56.439579 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:58:06 crc kubenswrapper[4764]: I0127 07:58:06.912144 4764 generic.go:334] "Generic (PLEG): container finished" podID="348551f5-8fe7-4ca4-a294-ef2587ea3928" containerID="8ed897d5c52d30f4ffc213bee96dea98a78684de1efbe2f08c2260213ab119dd" exitCode=0 Jan 27 07:58:06 crc kubenswrapper[4764]: I0127 07:58:06.912232 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" event={"ID":"348551f5-8fe7-4ca4-a294-ef2587ea3928","Type":"ContainerDied","Data":"8ed897d5c52d30f4ffc213bee96dea98a78684de1efbe2f08c2260213ab119dd"} Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.340600 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.402304 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-combined-ca-bundle\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.402741 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-1\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.402796 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-ssh-key-openstack-edpm-ipam\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.402859 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-0\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.402916 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-0\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 
07:58:08.402962 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-1\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.403049 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-inventory\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.403528 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-extra-config-0\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.403603 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl4p4\" (UniqueName: \"kubernetes.io/projected/348551f5-8fe7-4ca4-a294-ef2587ea3928-kube-api-access-zl4p4\") pod \"348551f5-8fe7-4ca4-a294-ef2587ea3928\" (UID: \"348551f5-8fe7-4ca4-a294-ef2587ea3928\") " Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.411742 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348551f5-8fe7-4ca4-a294-ef2587ea3928-kube-api-access-zl4p4" (OuterVolumeSpecName: "kube-api-access-zl4p4") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "kube-api-access-zl4p4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.415684 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.431824 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.433912 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-inventory" (OuterVolumeSpecName: "inventory") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.435034 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.435228 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.441064 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.444107 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.445476 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "348551f5-8fe7-4ca4-a294-ef2587ea3928" (UID: "348551f5-8fe7-4ca4-a294-ef2587ea3928"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505821 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505852 4764 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505861 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl4p4\" (UniqueName: \"kubernetes.io/projected/348551f5-8fe7-4ca4-a294-ef2587ea3928-kube-api-access-zl4p4\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505870 4764 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505879 4764 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505887 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505898 4764 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-0\") on node 
\"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505906 4764 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.505914 4764 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/348551f5-8fe7-4ca4-a294-ef2587ea3928-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.927054 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" event={"ID":"348551f5-8fe7-4ca4-a294-ef2587ea3928","Type":"ContainerDied","Data":"531a029c61277cda5edc7891a4a200943ad46c6a61c71c00d534af33c4d5a5b8"} Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.927249 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="531a029c61277cda5edc7891a4a200943ad46c6a61c71c00d534af33c4d5a5b8" Jan 27 07:58:08 crc kubenswrapper[4764]: I0127 07:58:08.927106 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qfnxn" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.033202 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4"] Jan 27 07:58:09 crc kubenswrapper[4764]: E0127 07:58:09.033585 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348551f5-8fe7-4ca4-a294-ef2587ea3928" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.033607 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="348551f5-8fe7-4ca4-a294-ef2587ea3928" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.033799 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="348551f5-8fe7-4ca4-a294-ef2587ea3928" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.034390 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.036346 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.036483 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.036649 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wbnxl" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.036778 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.036347 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.060151 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4"] Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.216583 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhpp2\" (UniqueName: \"kubernetes.io/projected/9e594ddd-3885-411e-8728-488126ab67b2-kube-api-access-vhpp2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.216655 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.216784 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.216840 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.216902 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.217028 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.217072 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.318257 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.318303 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.318397 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhpp2\" (UniqueName: \"kubernetes.io/projected/9e594ddd-3885-411e-8728-488126ab67b2-kube-api-access-vhpp2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 
07:58:09.318419 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.318467 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.318490 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.318529 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.322712 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.323069 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.323069 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.323525 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.323625 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.324028 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.335950 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhpp2\" (UniqueName: \"kubernetes.io/projected/9e594ddd-3885-411e-8728-488126ab67b2-kube-api-access-vhpp2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.349869 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.894046 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4"] Jan 27 07:58:09 crc kubenswrapper[4764]: I0127 07:58:09.935129 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" event={"ID":"9e594ddd-3885-411e-8728-488126ab67b2","Type":"ContainerStarted","Data":"e41ea0823d5fd79120f710f01828a2c4f5b5ecb53190ccf40b5d095d49e8e1f5"} Jan 27 07:58:10 crc kubenswrapper[4764]: I0127 07:58:10.438561 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:58:10 crc kubenswrapper[4764]: E0127 07:58:10.439212 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:58:10 crc kubenswrapper[4764]: I0127 07:58:10.945650 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" event={"ID":"9e594ddd-3885-411e-8728-488126ab67b2","Type":"ContainerStarted","Data":"6476e975e25b4ae5dbe3e2767e93e421942788aec20e210053ff6b7a84381bbf"} Jan 27 07:58:10 crc kubenswrapper[4764]: I0127 07:58:10.980532 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" podStartSLOduration=1.474034562 podStartE2EDuration="1.980505737s" podCreationTimestamp="2026-01-27 07:58:09 +0000 UTC" firstStartedPulling="2026-01-27 
07:58:09.906415793 +0000 UTC m=+2502.502038319" lastFinishedPulling="2026-01-27 07:58:10.412886968 +0000 UTC m=+2503.008509494" observedRunningTime="2026-01-27 07:58:10.969292504 +0000 UTC m=+2503.564915060" watchObservedRunningTime="2026-01-27 07:58:10.980505737 +0000 UTC m=+2503.576128303" Jan 27 07:58:24 crc kubenswrapper[4764]: I0127 07:58:24.438827 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:58:24 crc kubenswrapper[4764]: E0127 07:58:24.439794 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:58:39 crc kubenswrapper[4764]: I0127 07:58:39.438519 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:58:39 crc kubenswrapper[4764]: E0127 07:58:39.439293 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:58:52 crc kubenswrapper[4764]: I0127 07:58:52.438599 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:58:52 crc kubenswrapper[4764]: E0127 07:58:52.439431 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:59:06 crc kubenswrapper[4764]: I0127 07:59:06.438080 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:59:06 crc kubenswrapper[4764]: E0127 07:59:06.438775 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:59:17 crc kubenswrapper[4764]: I0127 07:59:17.438543 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:59:17 crc kubenswrapper[4764]: E0127 07:59:17.439292 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:59:28 crc kubenswrapper[4764]: I0127 07:59:28.451599 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:59:28 crc kubenswrapper[4764]: E0127 07:59:28.453937 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:59:43 crc kubenswrapper[4764]: I0127 07:59:43.440428 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:59:43 crc kubenswrapper[4764]: E0127 07:59:43.441394 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 07:59:54 crc kubenswrapper[4764]: I0127 07:59:54.438926 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 07:59:54 crc kubenswrapper[4764]: E0127 07:59:54.439671 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.157149 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz"] Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.160409 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.163308 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.164565 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.175142 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz"] Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.261807 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb3c800-e2a7-4381-9680-865dd088feac-config-volume\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.261959 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz56x\" (UniqueName: \"kubernetes.io/projected/ccb3c800-e2a7-4381-9680-865dd088feac-kube-api-access-pz56x\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.261995 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb3c800-e2a7-4381-9680-865dd088feac-secret-volume\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.364066 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb3c800-e2a7-4381-9680-865dd088feac-config-volume\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.364363 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz56x\" (UniqueName: \"kubernetes.io/projected/ccb3c800-e2a7-4381-9680-865dd088feac-kube-api-access-pz56x\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.364425 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb3c800-e2a7-4381-9680-865dd088feac-secret-volume\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.365087 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb3c800-e2a7-4381-9680-865dd088feac-config-volume\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.371248 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ccb3c800-e2a7-4381-9680-865dd088feac-secret-volume\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.381692 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz56x\" (UniqueName: \"kubernetes.io/projected/ccb3c800-e2a7-4381-9680-865dd088feac-kube-api-access-pz56x\") pod \"collect-profiles-29491680-jdbcz\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.483892 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:00 crc kubenswrapper[4764]: I0127 08:00:00.936769 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz"] Jan 27 08:00:01 crc kubenswrapper[4764]: I0127 08:00:01.858735 4764 generic.go:334] "Generic (PLEG): container finished" podID="ccb3c800-e2a7-4381-9680-865dd088feac" containerID="b17deb7aba60fc095233fe34735e6842766fd538687988dc03d867dd64207790" exitCode=0 Jan 27 08:00:01 crc kubenswrapper[4764]: I0127 08:00:01.858994 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" event={"ID":"ccb3c800-e2a7-4381-9680-865dd088feac","Type":"ContainerDied","Data":"b17deb7aba60fc095233fe34735e6842766fd538687988dc03d867dd64207790"} Jan 27 08:00:01 crc kubenswrapper[4764]: I0127 08:00:01.859021 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" 
event={"ID":"ccb3c800-e2a7-4381-9680-865dd088feac","Type":"ContainerStarted","Data":"bdf7dd4383516856a549feb3fc24cdb788ff02d7324658355760d4777867994c"} Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.167363 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.226365 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb3c800-e2a7-4381-9680-865dd088feac-config-volume\") pod \"ccb3c800-e2a7-4381-9680-865dd088feac\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.226769 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb3c800-e2a7-4381-9680-865dd088feac-secret-volume\") pod \"ccb3c800-e2a7-4381-9680-865dd088feac\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.227367 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb3c800-e2a7-4381-9680-865dd088feac-config-volume" (OuterVolumeSpecName: "config-volume") pod "ccb3c800-e2a7-4381-9680-865dd088feac" (UID: "ccb3c800-e2a7-4381-9680-865dd088feac"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.227884 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz56x\" (UniqueName: \"kubernetes.io/projected/ccb3c800-e2a7-4381-9680-865dd088feac-kube-api-access-pz56x\") pod \"ccb3c800-e2a7-4381-9680-865dd088feac\" (UID: \"ccb3c800-e2a7-4381-9680-865dd088feac\") " Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.228617 4764 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb3c800-e2a7-4381-9680-865dd088feac-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.232840 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb3c800-e2a7-4381-9680-865dd088feac-kube-api-access-pz56x" (OuterVolumeSpecName: "kube-api-access-pz56x") pod "ccb3c800-e2a7-4381-9680-865dd088feac" (UID: "ccb3c800-e2a7-4381-9680-865dd088feac"). InnerVolumeSpecName "kube-api-access-pz56x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.233215 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb3c800-e2a7-4381-9680-865dd088feac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ccb3c800-e2a7-4381-9680-865dd088feac" (UID: "ccb3c800-e2a7-4381-9680-865dd088feac"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.330122 4764 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb3c800-e2a7-4381-9680-865dd088feac-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.330158 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz56x\" (UniqueName: \"kubernetes.io/projected/ccb3c800-e2a7-4381-9680-865dd088feac-kube-api-access-pz56x\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.894791 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" event={"ID":"ccb3c800-e2a7-4381-9680-865dd088feac","Type":"ContainerDied","Data":"bdf7dd4383516856a549feb3fc24cdb788ff02d7324658355760d4777867994c"} Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.895351 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-jdbcz" Jan 27 08:00:03 crc kubenswrapper[4764]: I0127 08:00:03.895487 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdf7dd4383516856a549feb3fc24cdb788ff02d7324658355760d4777867994c" Jan 27 08:00:04 crc kubenswrapper[4764]: I0127 08:00:04.260714 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"] Jan 27 08:00:04 crc kubenswrapper[4764]: I0127 08:00:04.274101 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-pg522"] Jan 27 08:00:04 crc kubenswrapper[4764]: I0127 08:00:04.448192 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c388c4-2071-4b3b-97b6-52aec664b967" path="/var/lib/kubelet/pods/b4c388c4-2071-4b3b-97b6-52aec664b967/volumes" Jan 27 08:00:09 crc kubenswrapper[4764]: I0127 08:00:09.437847 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:00:09 crc kubenswrapper[4764]: E0127 08:00:09.438766 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.063613 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dkxxt"] Jan 27 08:00:12 crc kubenswrapper[4764]: E0127 08:00:12.064281 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb3c800-e2a7-4381-9680-865dd088feac" 
containerName="collect-profiles" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.064516 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb3c800-e2a7-4381-9680-865dd088feac" containerName="collect-profiles" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.064702 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb3c800-e2a7-4381-9680-865dd088feac" containerName="collect-profiles" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.067121 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.075243 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkxxt"] Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.198993 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-utilities\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.200125 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjsw\" (UniqueName: \"kubernetes.io/projected/a329e9ac-b890-40cf-979d-ddc48164d101-kube-api-access-rbjsw\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.200305 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-catalog-content\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " 
pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.302428 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjsw\" (UniqueName: \"kubernetes.io/projected/a329e9ac-b890-40cf-979d-ddc48164d101-kube-api-access-rbjsw\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.302794 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-catalog-content\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.302991 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-utilities\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.303392 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-catalog-content\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.303468 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-utilities\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc 
kubenswrapper[4764]: I0127 08:00:12.325361 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjsw\" (UniqueName: \"kubernetes.io/projected/a329e9ac-b890-40cf-979d-ddc48164d101-kube-api-access-rbjsw\") pod \"redhat-operators-dkxxt\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.385530 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.870414 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkxxt"] Jan 27 08:00:12 crc kubenswrapper[4764]: I0127 08:00:12.996570 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkxxt" event={"ID":"a329e9ac-b890-40cf-979d-ddc48164d101","Type":"ContainerStarted","Data":"3dc944334f3bfdeb72d5e565122f24409974494f55832bd106c7d90b66551dae"} Jan 27 08:00:14 crc kubenswrapper[4764]: I0127 08:00:14.005799 4764 generic.go:334] "Generic (PLEG): container finished" podID="a329e9ac-b890-40cf-979d-ddc48164d101" containerID="f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9" exitCode=0 Jan 27 08:00:14 crc kubenswrapper[4764]: I0127 08:00:14.005867 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkxxt" event={"ID":"a329e9ac-b890-40cf-979d-ddc48164d101","Type":"ContainerDied","Data":"f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9"} Jan 27 08:00:14 crc kubenswrapper[4764]: I0127 08:00:14.008203 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:00:16 crc kubenswrapper[4764]: I0127 08:00:16.025710 4764 generic.go:334] "Generic (PLEG): container finished" podID="a329e9ac-b890-40cf-979d-ddc48164d101" 
containerID="18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8" exitCode=0 Jan 27 08:00:16 crc kubenswrapper[4764]: I0127 08:00:16.025757 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkxxt" event={"ID":"a329e9ac-b890-40cf-979d-ddc48164d101","Type":"ContainerDied","Data":"18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8"} Jan 27 08:00:17 crc kubenswrapper[4764]: I0127 08:00:17.037783 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkxxt" event={"ID":"a329e9ac-b890-40cf-979d-ddc48164d101","Type":"ContainerStarted","Data":"0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9"} Jan 27 08:00:17 crc kubenswrapper[4764]: I0127 08:00:17.067259 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dkxxt" podStartSLOduration=2.501528961 podStartE2EDuration="5.067235799s" podCreationTimestamp="2026-01-27 08:00:12 +0000 UTC" firstStartedPulling="2026-01-27 08:00:14.007913895 +0000 UTC m=+2626.603536421" lastFinishedPulling="2026-01-27 08:00:16.573620733 +0000 UTC m=+2629.169243259" observedRunningTime="2026-01-27 08:00:17.057041584 +0000 UTC m=+2629.652664110" watchObservedRunningTime="2026-01-27 08:00:17.067235799 +0000 UTC m=+2629.662858315" Jan 27 08:00:22 crc kubenswrapper[4764]: I0127 08:00:22.386084 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:22 crc kubenswrapper[4764]: I0127 08:00:22.386674 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:23 crc kubenswrapper[4764]: I0127 08:00:23.430733 4764 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dkxxt" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="registry-server" 
probeResult="failure" output=< Jan 27 08:00:23 crc kubenswrapper[4764]: timeout: failed to connect service ":50051" within 1s Jan 27 08:00:23 crc kubenswrapper[4764]: > Jan 27 08:00:23 crc kubenswrapper[4764]: I0127 08:00:23.438967 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:00:23 crc kubenswrapper[4764]: E0127 08:00:23.439368 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:00:32 crc kubenswrapper[4764]: I0127 08:00:32.436754 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:32 crc kubenswrapper[4764]: I0127 08:00:32.485793 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:32 crc kubenswrapper[4764]: I0127 08:00:32.675015 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkxxt"] Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.198828 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dkxxt" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="registry-server" containerID="cri-o://0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9" gracePeriod=2 Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.642221 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.771689 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-utilities\") pod \"a329e9ac-b890-40cf-979d-ddc48164d101\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.771815 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbjsw\" (UniqueName: \"kubernetes.io/projected/a329e9ac-b890-40cf-979d-ddc48164d101-kube-api-access-rbjsw\") pod \"a329e9ac-b890-40cf-979d-ddc48164d101\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.771892 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-catalog-content\") pod \"a329e9ac-b890-40cf-979d-ddc48164d101\" (UID: \"a329e9ac-b890-40cf-979d-ddc48164d101\") " Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.772711 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-utilities" (OuterVolumeSpecName: "utilities") pod "a329e9ac-b890-40cf-979d-ddc48164d101" (UID: "a329e9ac-b890-40cf-979d-ddc48164d101"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.777878 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a329e9ac-b890-40cf-979d-ddc48164d101-kube-api-access-rbjsw" (OuterVolumeSpecName: "kube-api-access-rbjsw") pod "a329e9ac-b890-40cf-979d-ddc48164d101" (UID: "a329e9ac-b890-40cf-979d-ddc48164d101"). InnerVolumeSpecName "kube-api-access-rbjsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.874555 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.874590 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbjsw\" (UniqueName: \"kubernetes.io/projected/a329e9ac-b890-40cf-979d-ddc48164d101-kube-api-access-rbjsw\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.899462 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a329e9ac-b890-40cf-979d-ddc48164d101" (UID: "a329e9ac-b890-40cf-979d-ddc48164d101"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:00:34 crc kubenswrapper[4764]: I0127 08:00:34.976398 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a329e9ac-b890-40cf-979d-ddc48164d101-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.215248 4764 generic.go:334] "Generic (PLEG): container finished" podID="a329e9ac-b890-40cf-979d-ddc48164d101" containerID="0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9" exitCode=0 Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.215545 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkxxt" event={"ID":"a329e9ac-b890-40cf-979d-ddc48164d101","Type":"ContainerDied","Data":"0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9"} Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.216065 4764 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-dkxxt" event={"ID":"a329e9ac-b890-40cf-979d-ddc48164d101","Type":"ContainerDied","Data":"3dc944334f3bfdeb72d5e565122f24409974494f55832bd106c7d90b66551dae"} Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.216111 4764 scope.go:117] "RemoveContainer" containerID="0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.215659 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkxxt" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.242554 4764 scope.go:117] "RemoveContainer" containerID="18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.259747 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkxxt"] Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.271601 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dkxxt"] Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.283171 4764 scope.go:117] "RemoveContainer" containerID="f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.319477 4764 scope.go:117] "RemoveContainer" containerID="0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9" Jan 27 08:00:35 crc kubenswrapper[4764]: E0127 08:00:35.320186 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9\": container with ID starting with 0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9 not found: ID does not exist" containerID="0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.320268 4764 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9"} err="failed to get container status \"0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9\": rpc error: code = NotFound desc = could not find container \"0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9\": container with ID starting with 0b3e78009de28c28d0a3fbe6cb504eef9b16c50611e343a1e275132a2378ffb9 not found: ID does not exist" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.320334 4764 scope.go:117] "RemoveContainer" containerID="18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8" Jan 27 08:00:35 crc kubenswrapper[4764]: E0127 08:00:35.320976 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8\": container with ID starting with 18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8 not found: ID does not exist" containerID="18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.321009 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8"} err="failed to get container status \"18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8\": rpc error: code = NotFound desc = could not find container \"18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8\": container with ID starting with 18ab4376e2f7b0986193bc9b16aadf2ec3bd1a6b7bf79185615d73e3acf662b8 not found: ID does not exist" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.321053 4764 scope.go:117] "RemoveContainer" containerID="f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9" Jan 27 08:00:35 crc kubenswrapper[4764]: E0127 
08:00:35.321414 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9\": container with ID starting with f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9 not found: ID does not exist" containerID="f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.321475 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9"} err="failed to get container status \"f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9\": rpc error: code = NotFound desc = could not find container \"f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9\": container with ID starting with f158122e67ff81c4dc50edaf060abd8fc73fb50fd8edc87af78d50a7efa7cdf9 not found: ID does not exist" Jan 27 08:00:35 crc kubenswrapper[4764]: I0127 08:00:35.439088 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:00:35 crc kubenswrapper[4764]: E0127 08:00:35.439389 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:00:36 crc kubenswrapper[4764]: I0127 08:00:36.448622 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" path="/var/lib/kubelet/pods/a329e9ac-b890-40cf-979d-ddc48164d101/volumes" Jan 27 08:00:37 crc kubenswrapper[4764]: I0127 08:00:37.247877 
4764 generic.go:334] "Generic (PLEG): container finished" podID="9e594ddd-3885-411e-8728-488126ab67b2" containerID="6476e975e25b4ae5dbe3e2767e93e421942788aec20e210053ff6b7a84381bbf" exitCode=0 Jan 27 08:00:37 crc kubenswrapper[4764]: I0127 08:00:37.247947 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" event={"ID":"9e594ddd-3885-411e-8728-488126ab67b2","Type":"ContainerDied","Data":"6476e975e25b4ae5dbe3e2767e93e421942788aec20e210053ff6b7a84381bbf"} Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.673238 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.764898 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-2\") pod \"9e594ddd-3885-411e-8728-488126ab67b2\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.764962 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-inventory\") pod \"9e594ddd-3885-411e-8728-488126ab67b2\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.765053 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-1\") pod \"9e594ddd-3885-411e-8728-488126ab67b2\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.765183 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ssh-key-openstack-edpm-ipam\") pod \"9e594ddd-3885-411e-8728-488126ab67b2\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.765251 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhpp2\" (UniqueName: \"kubernetes.io/projected/9e594ddd-3885-411e-8728-488126ab67b2-kube-api-access-vhpp2\") pod \"9e594ddd-3885-411e-8728-488126ab67b2\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.765270 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-0\") pod \"9e594ddd-3885-411e-8728-488126ab67b2\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.765302 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-telemetry-combined-ca-bundle\") pod \"9e594ddd-3885-411e-8728-488126ab67b2\" (UID: \"9e594ddd-3885-411e-8728-488126ab67b2\") " Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.789177 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9e594ddd-3885-411e-8728-488126ab67b2" (UID: "9e594ddd-3885-411e-8728-488126ab67b2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.809646 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e594ddd-3885-411e-8728-488126ab67b2-kube-api-access-vhpp2" (OuterVolumeSpecName: "kube-api-access-vhpp2") pod "9e594ddd-3885-411e-8728-488126ab67b2" (UID: "9e594ddd-3885-411e-8728-488126ab67b2"). InnerVolumeSpecName "kube-api-access-vhpp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.868319 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhpp2\" (UniqueName: \"kubernetes.io/projected/9e594ddd-3885-411e-8728-488126ab67b2-kube-api-access-vhpp2\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.868345 4764 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.886485 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9e594ddd-3885-411e-8728-488126ab67b2" (UID: "9e594ddd-3885-411e-8728-488126ab67b2"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.892227 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9e594ddd-3885-411e-8728-488126ab67b2" (UID: "9e594ddd-3885-411e-8728-488126ab67b2"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.907629 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9e594ddd-3885-411e-8728-488126ab67b2" (UID: "9e594ddd-3885-411e-8728-488126ab67b2"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.920798 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-inventory" (OuterVolumeSpecName: "inventory") pod "9e594ddd-3885-411e-8728-488126ab67b2" (UID: "9e594ddd-3885-411e-8728-488126ab67b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.933669 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9e594ddd-3885-411e-8728-488126ab67b2" (UID: "9e594ddd-3885-411e-8728-488126ab67b2"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.970508 4764 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.970765 4764 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.970855 4764 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.970966 4764 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:38 crc kubenswrapper[4764]: I0127 08:00:38.971050 4764 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e594ddd-3885-411e-8728-488126ab67b2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:39 crc kubenswrapper[4764]: I0127 08:00:39.268675 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" event={"ID":"9e594ddd-3885-411e-8728-488126ab67b2","Type":"ContainerDied","Data":"e41ea0823d5fd79120f710f01828a2c4f5b5ecb53190ccf40b5d095d49e8e1f5"} Jan 27 08:00:39 crc kubenswrapper[4764]: I0127 08:00:39.269001 4764 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e41ea0823d5fd79120f710f01828a2c4f5b5ecb53190ccf40b5d095d49e8e1f5" Jan 27 08:00:39 crc kubenswrapper[4764]: I0127 08:00:39.268795 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4" Jan 27 08:00:43 crc kubenswrapper[4764]: I0127 08:00:43.837834 4764 scope.go:117] "RemoveContainer" containerID="e9e7949a9102249840104a22988ccdbafd29b2a1dda0d3d687a8a7ba89386e89" Jan 27 08:00:50 crc kubenswrapper[4764]: I0127 08:00:50.438925 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:00:50 crc kubenswrapper[4764]: E0127 08:00:50.440271 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.148224 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29491681-5cn85"] Jan 27 08:01:00 crc kubenswrapper[4764]: E0127 08:01:00.149319 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e594ddd-3885-411e-8728-488126ab67b2" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.149339 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e594ddd-3885-411e-8728-488126ab67b2" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 08:01:00 crc kubenswrapper[4764]: E0127 08:01:00.149373 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="registry-server" Jan 27 08:01:00 crc 
kubenswrapper[4764]: I0127 08:01:00.149382 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="registry-server" Jan 27 08:01:00 crc kubenswrapper[4764]: E0127 08:01:00.149403 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="extract-content" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.149411 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="extract-content" Jan 27 08:01:00 crc kubenswrapper[4764]: E0127 08:01:00.149424 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="extract-utilities" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.149432 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="extract-utilities" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.149665 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a329e9ac-b890-40cf-979d-ddc48164d101" containerName="registry-server" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.149690 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e594ddd-3885-411e-8728-488126ab67b2" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.150500 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.157250 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29491681-5cn85"] Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.189238 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-combined-ca-bundle\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.189580 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-fernet-keys\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.189668 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-config-data\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.189753 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkpg\" (UniqueName: \"kubernetes.io/projected/8c44c98d-fecd-446f-99b1-bef1034340b6-kube-api-access-7vkpg\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.290472 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-combined-ca-bundle\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.290602 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-fernet-keys\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.290628 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-config-data\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.290664 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkpg\" (UniqueName: \"kubernetes.io/projected/8c44c98d-fecd-446f-99b1-bef1034340b6-kube-api-access-7vkpg\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.297236 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-combined-ca-bundle\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.298048 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-fernet-keys\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.298261 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-config-data\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.309260 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkpg\" (UniqueName: \"kubernetes.io/projected/8c44c98d-fecd-446f-99b1-bef1034340b6-kube-api-access-7vkpg\") pod \"keystone-cron-29491681-5cn85\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.479893 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:00 crc kubenswrapper[4764]: I0127 08:01:00.943352 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29491681-5cn85"] Jan 27 08:01:01 crc kubenswrapper[4764]: I0127 08:01:01.438005 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:01:01 crc kubenswrapper[4764]: E0127 08:01:01.438611 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:01:01 crc kubenswrapper[4764]: I0127 08:01:01.472021 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491681-5cn85" event={"ID":"8c44c98d-fecd-446f-99b1-bef1034340b6","Type":"ContainerStarted","Data":"3a336200e616f1ab51711f20095d11e8e11d6b10832e08b1de88e0ceae8d4324"} Jan 27 08:01:01 crc kubenswrapper[4764]: I0127 08:01:01.472067 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491681-5cn85" event={"ID":"8c44c98d-fecd-446f-99b1-bef1034340b6","Type":"ContainerStarted","Data":"fa12b0fa46b3faf21f6e1720b92a1d1073e000776d49f40ec16247a810084b6c"} Jan 27 08:01:03 crc kubenswrapper[4764]: I0127 08:01:03.489895 4764 generic.go:334] "Generic (PLEG): container finished" podID="8c44c98d-fecd-446f-99b1-bef1034340b6" containerID="3a336200e616f1ab51711f20095d11e8e11d6b10832e08b1de88e0ceae8d4324" exitCode=0 Jan 27 08:01:03 crc kubenswrapper[4764]: I0127 08:01:03.489939 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491681-5cn85" 
event={"ID":"8c44c98d-fecd-446f-99b1-bef1034340b6","Type":"ContainerDied","Data":"3a336200e616f1ab51711f20095d11e8e11d6b10832e08b1de88e0ceae8d4324"} Jan 27 08:01:04 crc kubenswrapper[4764]: I0127 08:01:04.855129 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:04 crc kubenswrapper[4764]: I0127 08:01:04.979377 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vkpg\" (UniqueName: \"kubernetes.io/projected/8c44c98d-fecd-446f-99b1-bef1034340b6-kube-api-access-7vkpg\") pod \"8c44c98d-fecd-446f-99b1-bef1034340b6\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " Jan 27 08:01:04 crc kubenswrapper[4764]: I0127 08:01:04.979552 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-config-data\") pod \"8c44c98d-fecd-446f-99b1-bef1034340b6\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " Jan 27 08:01:04 crc kubenswrapper[4764]: I0127 08:01:04.979666 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-fernet-keys\") pod \"8c44c98d-fecd-446f-99b1-bef1034340b6\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " Jan 27 08:01:04 crc kubenswrapper[4764]: I0127 08:01:04.979806 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-combined-ca-bundle\") pod \"8c44c98d-fecd-446f-99b1-bef1034340b6\" (UID: \"8c44c98d-fecd-446f-99b1-bef1034340b6\") " Jan 27 08:01:04 crc kubenswrapper[4764]: I0127 08:01:04.984747 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c44c98d-fecd-446f-99b1-bef1034340b6-kube-api-access-7vkpg" 
(OuterVolumeSpecName: "kube-api-access-7vkpg") pod "8c44c98d-fecd-446f-99b1-bef1034340b6" (UID: "8c44c98d-fecd-446f-99b1-bef1034340b6"). InnerVolumeSpecName "kube-api-access-7vkpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:01:04 crc kubenswrapper[4764]: I0127 08:01:04.985586 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8c44c98d-fecd-446f-99b1-bef1034340b6" (UID: "8c44c98d-fecd-446f-99b1-bef1034340b6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.014814 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c44c98d-fecd-446f-99b1-bef1034340b6" (UID: "8c44c98d-fecd-446f-99b1-bef1034340b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.030455 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-config-data" (OuterVolumeSpecName: "config-data") pod "8c44c98d-fecd-446f-99b1-bef1034340b6" (UID: "8c44c98d-fecd-446f-99b1-bef1034340b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.082946 4764 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.083012 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vkpg\" (UniqueName: \"kubernetes.io/projected/8c44c98d-fecd-446f-99b1-bef1034340b6-kube-api-access-7vkpg\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.083039 4764 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.083052 4764 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c44c98d-fecd-446f-99b1-bef1034340b6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.538993 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491681-5cn85" event={"ID":"8c44c98d-fecd-446f-99b1-bef1034340b6","Type":"ContainerDied","Data":"fa12b0fa46b3faf21f6e1720b92a1d1073e000776d49f40ec16247a810084b6c"} Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.539281 4764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa12b0fa46b3faf21f6e1720b92a1d1073e000776d49f40ec16247a810084b6c" Jan 27 08:01:05 crc kubenswrapper[4764]: I0127 08:01:05.539224 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29491681-5cn85" Jan 27 08:01:15 crc kubenswrapper[4764]: I0127 08:01:15.438785 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:01:15 crc kubenswrapper[4764]: E0127 08:01:15.439702 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:01:29 crc kubenswrapper[4764]: I0127 08:01:29.438778 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:01:29 crc kubenswrapper[4764]: I0127 08:01:29.778382 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"6efa855611d4ec5bacd10b31fa54274b2941f6c776b9a9f82ca35598d6fc4ae7"} Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.424742 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kxdq4/must-gather-ff6nd"] Jan 27 08:03:04 crc kubenswrapper[4764]: E0127 08:03:04.425644 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c44c98d-fecd-446f-99b1-bef1034340b6" containerName="keystone-cron" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.425655 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c44c98d-fecd-446f-99b1-bef1034340b6" containerName="keystone-cron" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.425823 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c44c98d-fecd-446f-99b1-bef1034340b6" 
containerName="keystone-cron" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.426732 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.429243 4764 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kxdq4"/"default-dockercfg-z8kxg" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.430699 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kxdq4"/"openshift-service-ca.crt" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.431070 4764 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kxdq4"/"kube-root-ca.crt" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.458330 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kxdq4/must-gather-ff6nd"] Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.568559 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zsl\" (UniqueName: \"kubernetes.io/projected/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-kube-api-access-56zsl\") pod \"must-gather-ff6nd\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.568616 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-must-gather-output\") pod \"must-gather-ff6nd\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.669869 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zsl\" (UniqueName: 
\"kubernetes.io/projected/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-kube-api-access-56zsl\") pod \"must-gather-ff6nd\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.670132 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-must-gather-output\") pod \"must-gather-ff6nd\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.670600 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-must-gather-output\") pod \"must-gather-ff6nd\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.696924 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56zsl\" (UniqueName: \"kubernetes.io/projected/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-kube-api-access-56zsl\") pod \"must-gather-ff6nd\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:04 crc kubenswrapper[4764]: I0127 08:03:04.748883 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:03:05 crc kubenswrapper[4764]: I0127 08:03:05.236230 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kxdq4/must-gather-ff6nd"] Jan 27 08:03:05 crc kubenswrapper[4764]: I0127 08:03:05.657875 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" event={"ID":"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2","Type":"ContainerStarted","Data":"a8f55dee719e849d8070ba1c488b3df76469171d96a60d1723414429f2cfda19"} Jan 27 08:03:12 crc kubenswrapper[4764]: I0127 08:03:12.733401 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" event={"ID":"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2","Type":"ContainerStarted","Data":"f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7"} Jan 27 08:03:13 crc kubenswrapper[4764]: I0127 08:03:13.752648 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" event={"ID":"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2","Type":"ContainerStarted","Data":"95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea"} Jan 27 08:03:13 crc kubenswrapper[4764]: I0127 08:03:13.778682 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" podStartSLOduration=2.528620545 podStartE2EDuration="9.778664278s" podCreationTimestamp="2026-01-27 08:03:04 +0000 UTC" firstStartedPulling="2026-01-27 08:03:05.247822012 +0000 UTC m=+2797.843444538" lastFinishedPulling="2026-01-27 08:03:12.497865745 +0000 UTC m=+2805.093488271" observedRunningTime="2026-01-27 08:03:13.770383904 +0000 UTC m=+2806.366006470" watchObservedRunningTime="2026-01-27 08:03:13.778664278 +0000 UTC m=+2806.374286804" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.141976 4764 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-kxdq4/crc-debug-sl54b"] Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.143868 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.205395 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqbs\" (UniqueName: \"kubernetes.io/projected/63a92ca7-f30b-4ba1-a922-90f1afb31e83-kube-api-access-xsqbs\") pod \"crc-debug-sl54b\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.205499 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63a92ca7-f30b-4ba1-a922-90f1afb31e83-host\") pod \"crc-debug-sl54b\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.307693 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsqbs\" (UniqueName: \"kubernetes.io/projected/63a92ca7-f30b-4ba1-a922-90f1afb31e83-kube-api-access-xsqbs\") pod \"crc-debug-sl54b\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.307758 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63a92ca7-f30b-4ba1-a922-90f1afb31e83-host\") pod \"crc-debug-sl54b\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.307945 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/63a92ca7-f30b-4ba1-a922-90f1afb31e83-host\") pod \"crc-debug-sl54b\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.326704 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsqbs\" (UniqueName: \"kubernetes.io/projected/63a92ca7-f30b-4ba1-a922-90f1afb31e83-kube-api-access-xsqbs\") pod \"crc-debug-sl54b\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.461481 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:16 crc kubenswrapper[4764]: W0127 08:03:16.493821 4764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a92ca7_f30b_4ba1_a922_90f1afb31e83.slice/crio-ed359b896a368885ad61f0cf181567c0f6a575a9d0f4e58bb1b272cf578c4430 WatchSource:0}: Error finding container ed359b896a368885ad61f0cf181567c0f6a575a9d0f4e58bb1b272cf578c4430: Status 404 returned error can't find the container with id ed359b896a368885ad61f0cf181567c0f6a575a9d0f4e58bb1b272cf578c4430 Jan 27 08:03:16 crc kubenswrapper[4764]: I0127 08:03:16.778419 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" event={"ID":"63a92ca7-f30b-4ba1-a922-90f1afb31e83","Type":"ContainerStarted","Data":"ed359b896a368885ad61f0cf181567c0f6a575a9d0f4e58bb1b272cf578c4430"} Jan 27 08:03:29 crc kubenswrapper[4764]: I0127 08:03:29.903760 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" event={"ID":"63a92ca7-f30b-4ba1-a922-90f1afb31e83","Type":"ContainerStarted","Data":"e4c6faedca05e667cd8c0a2979de6f44dc945bc1fdfd87700fcad0cfb575590f"} Jan 27 08:03:29 crc kubenswrapper[4764]: I0127 
08:03:29.929157 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" podStartSLOduration=1.671324246 podStartE2EDuration="13.929138261s" podCreationTimestamp="2026-01-27 08:03:16 +0000 UTC" firstStartedPulling="2026-01-27 08:03:16.495977274 +0000 UTC m=+2809.091599800" lastFinishedPulling="2026-01-27 08:03:28.753791289 +0000 UTC m=+2821.349413815" observedRunningTime="2026-01-27 08:03:29.922096871 +0000 UTC m=+2822.517719397" watchObservedRunningTime="2026-01-27 08:03:29.929138261 +0000 UTC m=+2822.524760777" Jan 27 08:03:49 crc kubenswrapper[4764]: I0127 08:03:49.054176 4764 generic.go:334] "Generic (PLEG): container finished" podID="63a92ca7-f30b-4ba1-a922-90f1afb31e83" containerID="e4c6faedca05e667cd8c0a2979de6f44dc945bc1fdfd87700fcad0cfb575590f" exitCode=0 Jan 27 08:03:49 crc kubenswrapper[4764]: I0127 08:03:49.054269 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" event={"ID":"63a92ca7-f30b-4ba1-a922-90f1afb31e83","Type":"ContainerDied","Data":"e4c6faedca05e667cd8c0a2979de6f44dc945bc1fdfd87700fcad0cfb575590f"} Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.201557 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.232157 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kxdq4/crc-debug-sl54b"] Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.240057 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kxdq4/crc-debug-sl54b"] Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.329374 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63a92ca7-f30b-4ba1-a922-90f1afb31e83-host\") pod \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.329526 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63a92ca7-f30b-4ba1-a922-90f1afb31e83-host" (OuterVolumeSpecName: "host") pod "63a92ca7-f30b-4ba1-a922-90f1afb31e83" (UID: "63a92ca7-f30b-4ba1-a922-90f1afb31e83"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.329732 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsqbs\" (UniqueName: \"kubernetes.io/projected/63a92ca7-f30b-4ba1-a922-90f1afb31e83-kube-api-access-xsqbs\") pod \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\" (UID: \"63a92ca7-f30b-4ba1-a922-90f1afb31e83\") " Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.330121 4764 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63a92ca7-f30b-4ba1-a922-90f1afb31e83-host\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.335885 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a92ca7-f30b-4ba1-a922-90f1afb31e83-kube-api-access-xsqbs" (OuterVolumeSpecName: "kube-api-access-xsqbs") pod "63a92ca7-f30b-4ba1-a922-90f1afb31e83" (UID: "63a92ca7-f30b-4ba1-a922-90f1afb31e83"). InnerVolumeSpecName "kube-api-access-xsqbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.432508 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsqbs\" (UniqueName: \"kubernetes.io/projected/63a92ca7-f30b-4ba1-a922-90f1afb31e83-kube-api-access-xsqbs\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:50 crc kubenswrapper[4764]: I0127 08:03:50.450885 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a92ca7-f30b-4ba1-a922-90f1afb31e83" path="/var/lib/kubelet/pods/63a92ca7-f30b-4ba1-a922-90f1afb31e83/volumes" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.085234 4764 scope.go:117] "RemoveContainer" containerID="e4c6faedca05e667cd8c0a2979de6f44dc945bc1fdfd87700fcad0cfb575590f" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.085283 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-sl54b" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.430772 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kxdq4/crc-debug-nqnzs"] Jan 27 08:03:51 crc kubenswrapper[4764]: E0127 08:03:51.431327 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a92ca7-f30b-4ba1-a922-90f1afb31e83" containerName="container-00" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.431340 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a92ca7-f30b-4ba1-a922-90f1afb31e83" containerName="container-00" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.431532 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a92ca7-f30b-4ba1-a922-90f1afb31e83" containerName="container-00" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.432075 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.551174 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a307a3-8ad1-42db-b308-f39a42d67c5f-host\") pod \"crc-debug-nqnzs\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.551692 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsnvm\" (UniqueName: \"kubernetes.io/projected/45a307a3-8ad1-42db-b308-f39a42d67c5f-kube-api-access-hsnvm\") pod \"crc-debug-nqnzs\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.653396 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsnvm\" (UniqueName: 
\"kubernetes.io/projected/45a307a3-8ad1-42db-b308-f39a42d67c5f-kube-api-access-hsnvm\") pod \"crc-debug-nqnzs\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.653495 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a307a3-8ad1-42db-b308-f39a42d67c5f-host\") pod \"crc-debug-nqnzs\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.653604 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a307a3-8ad1-42db-b308-f39a42d67c5f-host\") pod \"crc-debug-nqnzs\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.680139 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsnvm\" (UniqueName: \"kubernetes.io/projected/45a307a3-8ad1-42db-b308-f39a42d67c5f-kube-api-access-hsnvm\") pod \"crc-debug-nqnzs\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:51 crc kubenswrapper[4764]: I0127 08:03:51.746828 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:52 crc kubenswrapper[4764]: I0127 08:03:52.095792 4764 generic.go:334] "Generic (PLEG): container finished" podID="45a307a3-8ad1-42db-b308-f39a42d67c5f" containerID="3755fb27ac6be1a6001198d56f4dd54deb5b4dd64ad774fe508fcb696dce0453" exitCode=1 Jan 27 08:03:52 crc kubenswrapper[4764]: I0127 08:03:52.095985 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" event={"ID":"45a307a3-8ad1-42db-b308-f39a42d67c5f","Type":"ContainerDied","Data":"3755fb27ac6be1a6001198d56f4dd54deb5b4dd64ad774fe508fcb696dce0453"} Jan 27 08:03:52 crc kubenswrapper[4764]: I0127 08:03:52.096245 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" event={"ID":"45a307a3-8ad1-42db-b308-f39a42d67c5f","Type":"ContainerStarted","Data":"20feee936ebabaef5b79c740600c8b21409094cb07531039ba585a5c67cc399f"} Jan 27 08:03:52 crc kubenswrapper[4764]: I0127 08:03:52.130395 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kxdq4/crc-debug-nqnzs"] Jan 27 08:03:52 crc kubenswrapper[4764]: I0127 08:03:52.138466 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kxdq4/crc-debug-nqnzs"] Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.209981 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.282192 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a307a3-8ad1-42db-b308-f39a42d67c5f-host\") pod \"45a307a3-8ad1-42db-b308-f39a42d67c5f\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.282366 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45a307a3-8ad1-42db-b308-f39a42d67c5f-host" (OuterVolumeSpecName: "host") pod "45a307a3-8ad1-42db-b308-f39a42d67c5f" (UID: "45a307a3-8ad1-42db-b308-f39a42d67c5f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.282418 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsnvm\" (UniqueName: \"kubernetes.io/projected/45a307a3-8ad1-42db-b308-f39a42d67c5f-kube-api-access-hsnvm\") pod \"45a307a3-8ad1-42db-b308-f39a42d67c5f\" (UID: \"45a307a3-8ad1-42db-b308-f39a42d67c5f\") " Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.283069 4764 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a307a3-8ad1-42db-b308-f39a42d67c5f-host\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.288380 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a307a3-8ad1-42db-b308-f39a42d67c5f-kube-api-access-hsnvm" (OuterVolumeSpecName: "kube-api-access-hsnvm") pod "45a307a3-8ad1-42db-b308-f39a42d67c5f" (UID: "45a307a3-8ad1-42db-b308-f39a42d67c5f"). InnerVolumeSpecName "kube-api-access-hsnvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.385086 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsnvm\" (UniqueName: \"kubernetes.io/projected/45a307a3-8ad1-42db-b308-f39a42d67c5f-kube-api-access-hsnvm\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.762467 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:03:53 crc kubenswrapper[4764]: I0127 08:03:53.762542 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:03:54 crc kubenswrapper[4764]: I0127 08:03:54.115252 4764 scope.go:117] "RemoveContainer" containerID="3755fb27ac6be1a6001198d56f4dd54deb5b4dd64ad774fe508fcb696dce0453" Jan 27 08:03:54 crc kubenswrapper[4764]: I0127 08:03:54.115266 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kxdq4/crc-debug-nqnzs" Jan 27 08:03:54 crc kubenswrapper[4764]: I0127 08:03:54.449987 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a307a3-8ad1-42db-b308-f39a42d67c5f" path="/var/lib/kubelet/pods/45a307a3-8ad1-42db-b308-f39a42d67c5f/volumes" Jan 27 08:04:23 crc kubenswrapper[4764]: I0127 08:04:23.762134 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:04:23 crc kubenswrapper[4764]: I0127 08:04:23.763196 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.032932 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bdbf9956d-hxvm6_bc9a7b7d-fe4a-4120-a0db-f757fba16ccf/barbican-api/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.218112 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bdbf9956d-hxvm6_bc9a7b7d-fe4a-4120-a0db-f757fba16ccf/barbican-api-log/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.257801 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8657d6789d-n6r2p_a45db52d-2a92-4743-9a8b-12e623299cd5/barbican-keystone-listener/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.343890 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-8657d6789d-n6r2p_a45db52d-2a92-4743-9a8b-12e623299cd5/barbican-keystone-listener-log/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.467206 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f9c8d5-vhpg6_b0cca1ae-6ef7-421a-b481-c4251ff65668/barbican-worker/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.480260 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f9c8d5-vhpg6_b0cca1ae-6ef7-421a-b481-c4251ff65668/barbican-worker-log/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.660997 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xqrdv_6f8df847-d027-42ca-a466-a8ccd60b9428/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.721431 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ce77f72a-8ed8-4216-b443-a1c5737a50e7/ceilometer-central-agent/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.822994 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ce77f72a-8ed8-4216-b443-a1c5737a50e7/ceilometer-notification-agent/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.890185 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ce77f72a-8ed8-4216-b443-a1c5737a50e7/proxy-httpd/0.log" Jan 27 08:04:30 crc kubenswrapper[4764]: I0127 08:04:30.913954 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ce77f72a-8ed8-4216-b443-a1c5737a50e7/sg-core/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.066322 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_509e5ab7-b8a4-43a3-8622-e76d16374941/cinder-api/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 
08:04:31.089262 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_509e5ab7-b8a4-43a3-8622-e76d16374941/cinder-api-log/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.219905 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_311b6d25-72b1-40db-911d-78426da15c6b/cinder-scheduler/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.272882 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_311b6d25-72b1-40db-911d-78426da15c6b/probe/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.395065 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vfwgl_6eb419d4-1c38-4da9-95b7-0dd6ce308bdb/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.533352 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-j55zp_430b1bd3-ef93-47e7-a02b-df097c4b44d4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.581264 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-s8lpl_4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b/init/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.793476 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-s8lpl_4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b/init/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.858693 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-s8lpl_4dc32db6-6ef0-4a6b-b6bc-c8315ec7748b/dnsmasq-dns/0.log" Jan 27 08:04:31 crc kubenswrapper[4764]: I0127 08:04:31.883982 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sxjgv_8b37bfd5-b31b-489d-a973-ffeeb769660c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.078277 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e9a8eb4-9ded-40d9-91c2-824abdc80016/glance-httpd/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.094779 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e9a8eb4-9ded-40d9-91c2-824abdc80016/glance-log/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.264052 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_39745950-025a-428e-bd01-03d3d0d5050b/glance-httpd/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.270270 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_39745950-025a-428e-bd01-03d3d0d5050b/glance-log/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.405805 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5ff9bfcff8-v9nrc_4acbb02d-c98c-4b45-bc09-dd13fe383502/horizon/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.621106 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l9qlp_faa09b18-d734-422b-8dcc-ff3aad34a549/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.722581 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5ff9bfcff8-v9nrc_4acbb02d-c98c-4b45-bc09-dd13fe383502/horizon-log/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.795967 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4kt4p_302747f7-58c1-4c8d-8e21-e713bb849750/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:32 crc kubenswrapper[4764]: I0127 08:04:32.960818 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c7589cf-vmvrw_b7ede279-fbc7-439d-9b05-95bb8705cbbb/keystone-api/0.log" Jan 27 08:04:33 crc kubenswrapper[4764]: I0127 08:04:33.024694 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29491681-5cn85_8c44c98d-fecd-446f-99b1-bef1034340b6/keystone-cron/0.log" Jan 27 08:04:33 crc kubenswrapper[4764]: I0127 08:04:33.193056 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8e1fb0ac-7115-4548-b084-a0b800ad68a8/kube-state-metrics/0.log" Jan 27 08:04:33 crc kubenswrapper[4764]: I0127 08:04:33.264866 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4js7l_0bb121c0-ea4c-49b6-b818-e36e6b657d53/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:33 crc kubenswrapper[4764]: I0127 08:04:33.545379 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68dd46b99f-78lf2_5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4/neutron-api/0.log" Jan 27 08:04:33 crc kubenswrapper[4764]: I0127 08:04:33.652807 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68dd46b99f-78lf2_5f0bb3d9-228a-4f07-a8c4-5c6ca0ab20d4/neutron-httpd/0.log" Jan 27 08:04:33 crc kubenswrapper[4764]: I0127 08:04:33.761070 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-drl5k_c5cba19d-4bf2-4dba-b8e3-43594cec3acb/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:34 crc kubenswrapper[4764]: I0127 08:04:34.237958 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_d3d920a5-7d67-483d-9150-fd6a434a3def/nova-api-log/0.log" Jan 27 08:04:34 crc kubenswrapper[4764]: I0127 08:04:34.323355 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d3d920a5-7d67-483d-9150-fd6a434a3def/nova-api-api/0.log" Jan 27 08:04:34 crc kubenswrapper[4764]: I0127 08:04:34.610700 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a9756bce-266e-4397-875e-8be9c3f383f9/nova-cell1-conductor-conductor/0.log" Jan 27 08:04:34 crc kubenswrapper[4764]: I0127 08:04:34.615013 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b9a73b99-e667-4d9e-81ca-587d8c3a73b6/nova-cell0-conductor-conductor/0.log" Jan 27 08:04:34 crc kubenswrapper[4764]: I0127 08:04:34.916165 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7d674cb9-7f4b-4557-a739-dc3c4aba7bdb/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 08:04:34 crc kubenswrapper[4764]: I0127 08:04:34.941549 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qfnxn_348551f5-8fe7-4ca4-a294-ef2587ea3928/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:35 crc kubenswrapper[4764]: I0127 08:04:35.231381 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd/nova-metadata-log/0.log" Jan 27 08:04:35 crc kubenswrapper[4764]: I0127 08:04:35.334953 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_887d1985-52c3-476c-bd70-e82a17852b13/nova-scheduler-scheduler/0.log" Jan 27 08:04:35 crc kubenswrapper[4764]: I0127 08:04:35.483045 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ddcf25f7-570e-4d97-9109-9331ba1286a0/mysql-bootstrap/0.log" Jan 27 08:04:35 crc kubenswrapper[4764]: I0127 
08:04:35.635875 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ddcf25f7-570e-4d97-9109-9331ba1286a0/mysql-bootstrap/0.log" Jan 27 08:04:35 crc kubenswrapper[4764]: I0127 08:04:35.672916 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ddcf25f7-570e-4d97-9109-9331ba1286a0/galera/0.log" Jan 27 08:04:35 crc kubenswrapper[4764]: I0127 08:04:35.880272 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffba56e0-1785-4bd3-8ed4-a2fdc0cdcfdd/nova-metadata-metadata/0.log" Jan 27 08:04:35 crc kubenswrapper[4764]: I0127 08:04:35.898207 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_767ab4c4-b54f-448f-af5a-b4d07b433023/mysql-bootstrap/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.118376 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3bd2fe19-e10d-4784-9823-ad215851bc5a/openstackclient/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.137956 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_767ab4c4-b54f-448f-af5a-b4d07b433023/galera/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.150937 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_767ab4c4-b54f-448f-af5a-b4d07b433023/mysql-bootstrap/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.422979 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kvgxb"] Jan 27 08:04:36 crc kubenswrapper[4764]: E0127 08:04:36.424325 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a307a3-8ad1-42db-b308-f39a42d67c5f" containerName="container-00" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.424408 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a307a3-8ad1-42db-b308-f39a42d67c5f" 
containerName="container-00" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.424661 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a307a3-8ad1-42db-b308-f39a42d67c5f" containerName="container-00" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.425923 4764 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.461390 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvgxb"] Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.507972 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hl87j_3fc8ca71-7042-4f29-8c78-d7f1974c55c2/openstack-network-exporter/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.628718 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-75gxq_d2d1dfce-f31e-412a-af93-ad96fa2f3650/ovsdb-server-init/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.645639 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-catalog-content\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.645755 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-utilities\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.645790 4764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svpxj\" (UniqueName: \"kubernetes.io/projected/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-kube-api-access-svpxj\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.746924 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-catalog-content\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.747007 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-utilities\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.747030 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svpxj\" (UniqueName: \"kubernetes.io/projected/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-kube-api-access-svpxj\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.747473 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-catalog-content\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.747567 4764 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-utilities\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.768339 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svpxj\" (UniqueName: \"kubernetes.io/projected/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-kube-api-access-svpxj\") pod \"redhat-marketplace-kvgxb\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.780765 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-75gxq_d2d1dfce-f31e-412a-af93-ad96fa2f3650/ovs-vswitchd/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.796587 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-75gxq_d2d1dfce-f31e-412a-af93-ad96fa2f3650/ovsdb-server-init/0.log" Jan 27 08:04:36 crc kubenswrapper[4764]: I0127 08:04:36.886231 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-75gxq_d2d1dfce-f31e-412a-af93-ad96fa2f3650/ovsdb-server/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.054804 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rw2w4_cad0e5a9-459c-4f9b-865b-ddc533316170/ovn-controller/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.067962 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.186787 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ndkg7_c26586b8-9b42-42de-9b7d-4b8081ee2a67/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.331067 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_febffc0b-3eb0-4183-993e-e12bb3e11744/openstack-network-exporter/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.406005 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_febffc0b-3eb0-4183-993e-e12bb3e11744/ovn-northd/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.556394 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a/ovsdbserver-nb/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.572208 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_72fcd1e5-ff0d-4f3f-a6ad-84f1fde2996a/openstack-network-exporter/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.575858 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvgxb"] Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.876685 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1aef55ab-9f36-4eb1-8556-27e2136d1725/openstack-network-exporter/0.log" Jan 27 08:04:37 crc kubenswrapper[4764]: I0127 08:04:37.890047 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1aef55ab-9f36-4eb1-8556-27e2136d1725/ovsdbserver-sb/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.143269 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-5547d9cbf4-x8lh6_a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e/placement-log/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.150028 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5547d9cbf4-x8lh6_a6b2e17e-3e8f-4ed5-b15d-aba3fb54713e/placement-api/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.196633 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c1fbc57c-38ee-49be-bee5-4b04c5ef3211/setup-container/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.393812 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c1fbc57c-38ee-49be-bee5-4b04c5ef3211/setup-container/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.520746 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95dd078e-042c-48b3-aa1c-f8f801d66ae0/setup-container/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.534135 4764 generic.go:334] "Generic (PLEG): container finished" podID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerID="8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31" exitCode=0 Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.534187 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvgxb" event={"ID":"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae","Type":"ContainerDied","Data":"8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31"} Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.534219 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvgxb" event={"ID":"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae","Type":"ContainerStarted","Data":"dd29d6fbc5473f7b970a3df442b9ae9139cbb7d2d86ce50fcebff913afca4e37"} Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.535388 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c1fbc57c-38ee-49be-bee5-4b04c5ef3211/rabbitmq/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.686724 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95dd078e-042c-48b3-aa1c-f8f801d66ae0/setup-container/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.792219 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-x6vnh_cae6078e-ba1a-4ada-89be-3d6b35993b05/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:38 crc kubenswrapper[4764]: I0127 08:04:38.807329 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95dd078e-042c-48b3-aa1c-f8f801d66ae0/rabbitmq/0.log" Jan 27 08:04:39 crc kubenswrapper[4764]: I0127 08:04:39.066279 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d6hnb_42c07d3a-5dda-4260-81a0-af6bb112ea25/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:39 crc kubenswrapper[4764]: I0127 08:04:39.088987 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-t25pt_ff81f29d-57aa-4263-8554-6f4d4318bdd4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:39 crc kubenswrapper[4764]: I0127 08:04:39.553851 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8rnkw_d210edad-b0d1-4060-8c14-bb8f137338c7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:39 crc kubenswrapper[4764]: I0127 08:04:39.564943 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pnbrj_b65e5efc-eb4e-4e3f-b7b3-903df7ed52b7/ssh-known-hosts-edpm-deployment/0.log" Jan 27 08:04:39 crc kubenswrapper[4764]: I0127 08:04:39.792275 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-db5d878f-pf9rf_69437692-e8cb-4991-a2de-1434f68c7201/proxy-server/0.log" Jan 27 08:04:39 crc kubenswrapper[4764]: I0127 08:04:39.866990 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-db5d878f-pf9rf_69437692-e8cb-4991-a2de-1434f68c7201/proxy-httpd/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.019165 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5fdxs_5bc23877-3f5d-40bd-a1ee-4589e777beec/swift-ring-rebalance/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.124491 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/account-auditor/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.162138 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/account-reaper/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.332234 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/account-server/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.356716 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/account-replicator/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.405666 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/container-auditor/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.422057 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/container-replicator/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.568972 4764 generic.go:334] "Generic (PLEG): container finished" 
podID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerID="9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200" exitCode=0 Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.569023 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvgxb" event={"ID":"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae","Type":"ContainerDied","Data":"9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200"} Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.586878 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/container-server/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.596099 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/container-updater/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.641936 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/object-auditor/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.681724 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/object-expirer/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.800934 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/object-replicator/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.860085 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/object-server/0.log" Jan 27 08:04:40 crc kubenswrapper[4764]: I0127 08:04:40.923079 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/object-updater/0.log" Jan 27 08:04:41 crc kubenswrapper[4764]: I0127 
08:04:41.003001 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/rsync/0.log" Jan 27 08:04:41 crc kubenswrapper[4764]: I0127 08:04:41.013622 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f481ed8-7f32-478b-88ce-6caaa3a42074/swift-recon-cron/0.log" Jan 27 08:04:41 crc kubenswrapper[4764]: I0127 08:04:41.330322 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-s4sjb_405eb05b-d23b-4ef5-b1bf-617c22a27767/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:41 crc kubenswrapper[4764]: I0127 08:04:41.356266 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7gnf4_9e594ddd-3885-411e-8728-488126ab67b2/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 08:04:41 crc kubenswrapper[4764]: I0127 08:04:41.579984 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvgxb" event={"ID":"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae","Type":"ContainerStarted","Data":"01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd"} Jan 27 08:04:41 crc kubenswrapper[4764]: I0127 08:04:41.613427 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kvgxb" podStartSLOduration=3.149988468 podStartE2EDuration="5.61339972s" podCreationTimestamp="2026-01-27 08:04:36 +0000 UTC" firstStartedPulling="2026-01-27 08:04:38.537039663 +0000 UTC m=+2891.132662189" lastFinishedPulling="2026-01-27 08:04:41.000450905 +0000 UTC m=+2893.596073441" observedRunningTime="2026-01-27 08:04:41.604839138 +0000 UTC m=+2894.200461664" watchObservedRunningTime="2026-01-27 08:04:41.61339972 +0000 UTC m=+2894.209022246" Jan 27 08:04:44 crc kubenswrapper[4764]: I0127 08:04:44.652396 4764 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_memcached-0_9689ecb1-cfaf-4f78-aa32-ca09875bfe4f/memcached/0.log" Jan 27 08:04:47 crc kubenswrapper[4764]: I0127 08:04:47.069664 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:47 crc kubenswrapper[4764]: I0127 08:04:47.070240 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:47 crc kubenswrapper[4764]: I0127 08:04:47.124204 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:47 crc kubenswrapper[4764]: I0127 08:04:47.677292 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:47 crc kubenswrapper[4764]: I0127 08:04:47.729988 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvgxb"] Jan 27 08:04:49 crc kubenswrapper[4764]: I0127 08:04:49.645111 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kvgxb" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="registry-server" containerID="cri-o://01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd" gracePeriod=2 Jan 27 08:04:49 crc kubenswrapper[4764]: I0127 08:04:49.766667 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxwxb"] Jan 27 08:04:49 crc kubenswrapper[4764]: I0127 08:04:49.770454 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:49 crc kubenswrapper[4764]: I0127 08:04:49.787029 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxwxb"] Jan 27 08:04:49 crc kubenswrapper[4764]: I0127 08:04:49.901990 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-utilities\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:49 crc kubenswrapper[4764]: I0127 08:04:49.902371 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-catalog-content\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:49 crc kubenswrapper[4764]: I0127 08:04:49.902473 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzq4c\" (UniqueName: \"kubernetes.io/projected/a973c560-12e4-4a4f-98a2-8c061979a1b0-kube-api-access-lzq4c\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.010271 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-utilities\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.010326 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-catalog-content\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.010410 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzq4c\" (UniqueName: \"kubernetes.io/projected/a973c560-12e4-4a4f-98a2-8c061979a1b0-kube-api-access-lzq4c\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.011371 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-utilities\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.011410 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-catalog-content\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.035543 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzq4c\" (UniqueName: \"kubernetes.io/projected/a973c560-12e4-4a4f-98a2-8c061979a1b0-kube-api-access-lzq4c\") pod \"certified-operators-fxwxb\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.146292 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.249613 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.315499 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svpxj\" (UniqueName: \"kubernetes.io/projected/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-kube-api-access-svpxj\") pod \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.315922 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-utilities\") pod \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.316047 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-catalog-content\") pod \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\" (UID: \"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae\") " Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.325498 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-utilities" (OuterVolumeSpecName: "utilities") pod "2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" (UID: "2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.333090 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-kube-api-access-svpxj" (OuterVolumeSpecName: "kube-api-access-svpxj") pod "2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" (UID: "2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae"). InnerVolumeSpecName "kube-api-access-svpxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.367663 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" (UID: "2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.418534 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svpxj\" (UniqueName: \"kubernetes.io/projected/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-kube-api-access-svpxj\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.418568 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.418578 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.655094 4764 generic.go:334] "Generic (PLEG): container finished" podID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" 
containerID="01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd" exitCode=0 Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.655135 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvgxb" event={"ID":"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae","Type":"ContainerDied","Data":"01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd"} Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.655161 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvgxb" event={"ID":"2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae","Type":"ContainerDied","Data":"dd29d6fbc5473f7b970a3df442b9ae9139cbb7d2d86ce50fcebff913afca4e37"} Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.655178 4764 scope.go:117] "RemoveContainer" containerID="01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.655298 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvgxb" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.680547 4764 scope.go:117] "RemoveContainer" containerID="9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.690100 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvgxb"] Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.698427 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvgxb"] Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.710391 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxwxb"] Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.742928 4764 scope.go:117] "RemoveContainer" containerID="8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.869823 4764 scope.go:117] "RemoveContainer" containerID="01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd" Jan 27 08:04:50 crc kubenswrapper[4764]: E0127 08:04:50.879198 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd\": container with ID starting with 01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd not found: ID does not exist" containerID="01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.879255 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd"} err="failed to get container status \"01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd\": rpc error: code = NotFound desc = could not find container 
\"01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd\": container with ID starting with 01e9e23d047f62c1e0854df7c851578fa80e7321b2a9b1fe42f60a269cfb2bfd not found: ID does not exist" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.879287 4764 scope.go:117] "RemoveContainer" containerID="9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200" Jan 27 08:04:50 crc kubenswrapper[4764]: E0127 08:04:50.880920 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200\": container with ID starting with 9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200 not found: ID does not exist" containerID="9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.880949 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200"} err="failed to get container status \"9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200\": rpc error: code = NotFound desc = could not find container \"9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200\": container with ID starting with 9346b3919b651b9cebd825585daae726bf842b8097f431e8bf47fcefaa684200 not found: ID does not exist" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.880969 4764 scope.go:117] "RemoveContainer" containerID="8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31" Jan 27 08:04:50 crc kubenswrapper[4764]: E0127 08:04:50.887214 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31\": container with ID starting with 8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31 not found: ID does not exist" 
containerID="8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31" Jan 27 08:04:50 crc kubenswrapper[4764]: I0127 08:04:50.887491 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31"} err="failed to get container status \"8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31\": rpc error: code = NotFound desc = could not find container \"8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31\": container with ID starting with 8ef10e07efca5347f647261fac4907d1f9a5271d01f8d9b94d292c956eb1da31 not found: ID does not exist" Jan 27 08:04:51 crc kubenswrapper[4764]: I0127 08:04:51.666990 4764 generic.go:334] "Generic (PLEG): container finished" podID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerID="5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444" exitCode=0 Jan 27 08:04:51 crc kubenswrapper[4764]: I0127 08:04:51.667277 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxwxb" event={"ID":"a973c560-12e4-4a4f-98a2-8c061979a1b0","Type":"ContainerDied","Data":"5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444"} Jan 27 08:04:51 crc kubenswrapper[4764]: I0127 08:04:51.667306 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxwxb" event={"ID":"a973c560-12e4-4a4f-98a2-8c061979a1b0","Type":"ContainerStarted","Data":"89b1077663d0f85c39fb2726df9ee96755bb7bc2a962cb80224ffce65852965b"} Jan 27 08:04:52 crc kubenswrapper[4764]: I0127 08:04:52.451369 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" path="/var/lib/kubelet/pods/2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae/volumes" Jan 27 08:04:52 crc kubenswrapper[4764]: I0127 08:04:52.677501 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxwxb" 
event={"ID":"a973c560-12e4-4a4f-98a2-8c061979a1b0","Type":"ContainerStarted","Data":"0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a"} Jan 27 08:04:53 crc kubenswrapper[4764]: I0127 08:04:53.688483 4764 generic.go:334] "Generic (PLEG): container finished" podID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerID="0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a" exitCode=0 Jan 27 08:04:53 crc kubenswrapper[4764]: I0127 08:04:53.688526 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxwxb" event={"ID":"a973c560-12e4-4a4f-98a2-8c061979a1b0","Type":"ContainerDied","Data":"0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a"} Jan 27 08:04:53 crc kubenswrapper[4764]: I0127 08:04:53.762703 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:04:53 crc kubenswrapper[4764]: I0127 08:04:53.763036 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:04:53 crc kubenswrapper[4764]: I0127 08:04:53.763081 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 08:04:53 crc kubenswrapper[4764]: I0127 08:04:53.763827 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6efa855611d4ec5bacd10b31fa54274b2941f6c776b9a9f82ca35598d6fc4ae7"} 
pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:04:53 crc kubenswrapper[4764]: I0127 08:04:53.763876 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://6efa855611d4ec5bacd10b31fa54274b2941f6c776b9a9f82ca35598d6fc4ae7" gracePeriod=600 Jan 27 08:04:54 crc kubenswrapper[4764]: I0127 08:04:54.705060 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="6efa855611d4ec5bacd10b31fa54274b2941f6c776b9a9f82ca35598d6fc4ae7" exitCode=0 Jan 27 08:04:54 crc kubenswrapper[4764]: I0127 08:04:54.705222 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"6efa855611d4ec5bacd10b31fa54274b2941f6c776b9a9f82ca35598d6fc4ae7"} Jan 27 08:04:54 crc kubenswrapper[4764]: I0127 08:04:54.705379 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerStarted","Data":"d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6"} Jan 27 08:04:54 crc kubenswrapper[4764]: I0127 08:04:54.705405 4764 scope.go:117] "RemoveContainer" containerID="a6b73242e80d579dd52b9d5507f33a6c3e497d8526bdd321dae5d44cbd5bfc1a" Jan 27 08:04:55 crc kubenswrapper[4764]: I0127 08:04:55.715849 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxwxb" event={"ID":"a973c560-12e4-4a4f-98a2-8c061979a1b0","Type":"ContainerStarted","Data":"0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348"} Jan 27 
08:04:55 crc kubenswrapper[4764]: I0127 08:04:55.761674 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxwxb" podStartSLOduration=3.937534167 podStartE2EDuration="6.761654642s" podCreationTimestamp="2026-01-27 08:04:49 +0000 UTC" firstStartedPulling="2026-01-27 08:04:51.670062214 +0000 UTC m=+2904.265684740" lastFinishedPulling="2026-01-27 08:04:54.494182689 +0000 UTC m=+2907.089805215" observedRunningTime="2026-01-27 08:04:55.754692444 +0000 UTC m=+2908.350315010" watchObservedRunningTime="2026-01-27 08:04:55.761654642 +0000 UTC m=+2908.357277168" Jan 27 08:05:00 crc kubenswrapper[4764]: I0127 08:05:00.146966 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:05:00 crc kubenswrapper[4764]: I0127 08:05:00.147614 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:05:00 crc kubenswrapper[4764]: I0127 08:05:00.199723 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:05:00 crc kubenswrapper[4764]: I0127 08:05:00.809986 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:05:00 crc kubenswrapper[4764]: I0127 08:05:00.868417 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxwxb"] Jan 27 08:05:02 crc kubenswrapper[4764]: I0127 08:05:02.778915 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxwxb" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerName="registry-server" containerID="cri-o://0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348" gracePeriod=2 Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.253223 
4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.406999 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-catalog-content\") pod \"a973c560-12e4-4a4f-98a2-8c061979a1b0\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.407562 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzq4c\" (UniqueName: \"kubernetes.io/projected/a973c560-12e4-4a4f-98a2-8c061979a1b0-kube-api-access-lzq4c\") pod \"a973c560-12e4-4a4f-98a2-8c061979a1b0\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.407641 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-utilities\") pod \"a973c560-12e4-4a4f-98a2-8c061979a1b0\" (UID: \"a973c560-12e4-4a4f-98a2-8c061979a1b0\") " Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.408688 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-utilities" (OuterVolumeSpecName: "utilities") pod "a973c560-12e4-4a4f-98a2-8c061979a1b0" (UID: "a973c560-12e4-4a4f-98a2-8c061979a1b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.415517 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a973c560-12e4-4a4f-98a2-8c061979a1b0-kube-api-access-lzq4c" (OuterVolumeSpecName: "kube-api-access-lzq4c") pod "a973c560-12e4-4a4f-98a2-8c061979a1b0" (UID: "a973c560-12e4-4a4f-98a2-8c061979a1b0"). 
InnerVolumeSpecName "kube-api-access-lzq4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.460774 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a973c560-12e4-4a4f-98a2-8c061979a1b0" (UID: "a973c560-12e4-4a4f-98a2-8c061979a1b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.510882 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.510937 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzq4c\" (UniqueName: \"kubernetes.io/projected/a973c560-12e4-4a4f-98a2-8c061979a1b0-kube-api-access-lzq4c\") on node \"crc\" DevicePath \"\"" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.510953 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a973c560-12e4-4a4f-98a2-8c061979a1b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.789317 4764 generic.go:334] "Generic (PLEG): container finished" podID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerID="0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348" exitCode=0 Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.789362 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxwxb" event={"ID":"a973c560-12e4-4a4f-98a2-8c061979a1b0","Type":"ContainerDied","Data":"0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348"} Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.789407 
4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxwxb" event={"ID":"a973c560-12e4-4a4f-98a2-8c061979a1b0","Type":"ContainerDied","Data":"89b1077663d0f85c39fb2726df9ee96755bb7bc2a962cb80224ffce65852965b"} Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.789427 4764 scope.go:117] "RemoveContainer" containerID="0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.789603 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxwxb" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.825925 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxwxb"] Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.827261 4764 scope.go:117] "RemoveContainer" containerID="0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.836853 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fxwxb"] Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.860557 4764 scope.go:117] "RemoveContainer" containerID="5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.902878 4764 scope.go:117] "RemoveContainer" containerID="0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348" Jan 27 08:05:03 crc kubenswrapper[4764]: E0127 08:05:03.903403 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348\": container with ID starting with 0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348 not found: ID does not exist" containerID="0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348" Jan 27 08:05:03 
crc kubenswrapper[4764]: I0127 08:05:03.903559 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348"} err="failed to get container status \"0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348\": rpc error: code = NotFound desc = could not find container \"0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348\": container with ID starting with 0bf03ba928e9b96620251043ad5fc9c8c41ad99a62baed6a806a3a5d6b5c3348 not found: ID does not exist" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.903655 4764 scope.go:117] "RemoveContainer" containerID="0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a" Jan 27 08:05:03 crc kubenswrapper[4764]: E0127 08:05:03.904149 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a\": container with ID starting with 0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a not found: ID does not exist" containerID="0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.904179 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a"} err="failed to get container status \"0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a\": rpc error: code = NotFound desc = could not find container \"0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a\": container with ID starting with 0b8c38604ff631da6b4c5ba987641e030b6c72dc9701b15e97528c6c12c0dc4a not found: ID does not exist" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.904197 4764 scope.go:117] "RemoveContainer" containerID="5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444" Jan 27 
08:05:03 crc kubenswrapper[4764]: E0127 08:05:03.904742 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444\": container with ID starting with 5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444 not found: ID does not exist" containerID="5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444" Jan 27 08:05:03 crc kubenswrapper[4764]: I0127 08:05:03.904848 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444"} err="failed to get container status \"5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444\": rpc error: code = NotFound desc = could not find container \"5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444\": container with ID starting with 5d433dd5a641cb776f5fb1c39540a074d8d8e6d9dd69cbf11e5e2c2f7dc89444 not found: ID does not exist" Jan 27 08:05:04 crc kubenswrapper[4764]: I0127 08:05:04.451383 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" path="/var/lib/kubelet/pods/a973c560-12e4-4a4f-98a2-8c061979a1b0/volumes" Jan 27 08:05:07 crc kubenswrapper[4764]: I0127 08:05:07.727819 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57_c3f27bb3-8196-402a-a147-11074280c9d6/util/0.log" Jan 27 08:05:07 crc kubenswrapper[4764]: I0127 08:05:07.979739 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57_c3f27bb3-8196-402a-a147-11074280c9d6/util/0.log" Jan 27 08:05:07 crc kubenswrapper[4764]: I0127 08:05:07.994855 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57_c3f27bb3-8196-402a-a147-11074280c9d6/pull/0.log" Jan 27 08:05:08 crc kubenswrapper[4764]: I0127 08:05:08.048197 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57_c3f27bb3-8196-402a-a147-11074280c9d6/pull/0.log" Jan 27 08:05:08 crc kubenswrapper[4764]: I0127 08:05:08.196872 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57_c3f27bb3-8196-402a-a147-11074280c9d6/pull/0.log" Jan 27 08:05:08 crc kubenswrapper[4764]: I0127 08:05:08.214860 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57_c3f27bb3-8196-402a-a147-11074280c9d6/util/0.log" Jan 27 08:05:08 crc kubenswrapper[4764]: I0127 08:05:08.293750 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b8592b4bee85f54b7cb0d13a293b9a39b43f007684a97ff40ca91db6cfw5p57_c3f27bb3-8196-402a-a147-11074280c9d6/extract/0.log" Jan 27 08:05:08 crc kubenswrapper[4764]: I0127 08:05:08.599275 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5fdc687f5-cxmrb_14776870-1ec8-423a-a486-ac576b83cb99/manager/0.log" Jan 27 08:05:08 crc kubenswrapper[4764]: I0127 08:05:08.686801 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-76d4d5b8f9-r76hr_7f9c84bb-2150-49ce-9002-2719d491b2d9/manager/0.log" Jan 27 08:05:09 crc kubenswrapper[4764]: I0127 08:05:09.033427 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84d5bb46b-bh2ct_1328327d-b57d-4072-86de-039c4642a1f8/manager/0.log" Jan 27 08:05:09 crc kubenswrapper[4764]: I0127 
08:05:09.126825 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-hlkrc_6986731a-1bd6-4bfe-a196-7d9be4e9e6f8/manager/0.log" Jan 27 08:05:09 crc kubenswrapper[4764]: I0127 08:05:09.280336 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-nrlkg_65cc2cb9-2b52-4597-b5b5-0ca087d2f306/manager/0.log" Jan 27 08:05:09 crc kubenswrapper[4764]: I0127 08:05:09.633449 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-58865f87b4-wpkhc_e5c21e1e-f5ae-4f87-8789-2638c0b4dea1/manager/0.log" Jan 27 08:05:09 crc kubenswrapper[4764]: I0127 08:05:09.847775 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-27ncl_68e65a71-8e22-4256-81eb-cd9a58927e5a/manager/0.log" Jan 27 08:05:09 crc kubenswrapper[4764]: I0127 08:05:09.963417 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78f8b7b89c-ptrgx_2b6a69f3-cf3b-465a-917c-78cf3248eb58/manager/0.log" Jan 27 08:05:10 crc kubenswrapper[4764]: I0127 08:05:10.070335 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78b8f8fd84-8ffvc_be2b755e-7957-421a-be94-398366a49522/manager/0.log" Jan 27 08:05:10 crc kubenswrapper[4764]: I0127 08:05:10.568640 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-65dhc_6ae3b798-c9a5-48a9-8608-af33f26cb323/manager/0.log" Jan 27 08:05:10 crc kubenswrapper[4764]: I0127 08:05:10.759386 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-569695f6c5-87prv_85693d04-9de6-4da3-a527-c6d84ff033b2/manager/0.log" Jan 27 08:05:10 crc 
kubenswrapper[4764]: I0127 08:05:10.919378 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74ffd97575-rhpc8_93d08886-452d-4408-a924-c9e572c8b2f0/manager/0.log" Jan 27 08:05:11 crc kubenswrapper[4764]: I0127 08:05:11.230183 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7bd95ffd6dldjlk_2b3e29bf-af7b-4575-a91b-042b85a244c9/manager/0.log" Jan 27 08:05:11 crc kubenswrapper[4764]: I0127 08:05:11.717206 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6bfcf7b875-t6krg_08a061bb-1d9c-4d54-a894-e8352394b3a1/operator/0.log" Jan 27 08:05:12 crc kubenswrapper[4764]: I0127 08:05:12.333274 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tx6xn_1c35ba5a-68f5-474a-925e-f7580994a34c/registry-server/0.log" Jan 27 08:05:12 crc kubenswrapper[4764]: I0127 08:05:12.639189 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-x6k2g_f72762ca-57ca-4151-aa29-f2a7db4be1f0/manager/0.log" Jan 27 08:05:12 crc kubenswrapper[4764]: I0127 08:05:12.788243 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bf4858b78-xr5lf_89a9b88a-dcc3-462f-a5f2-1311113a92ca/manager/0.log" Jan 27 08:05:12 crc kubenswrapper[4764]: I0127 08:05:12.817175 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7748d79f84-xvlnc_683bcc0e-1607-496b-8d4b-195a1eb2bbaa/manager/0.log" Jan 27 08:05:13 crc kubenswrapper[4764]: I0127 08:05:13.096055 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6b8j8_45b5dc0d-656e-4475-9414-ac8f1e6ae767/operator/0.log" Jan 27 
08:05:13 crc kubenswrapper[4764]: I0127 08:05:13.217806 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-65596dbf77-h5nkr_6f80a85c-409f-4e68-ab1d-8ee9bb19e544/manager/0.log" Jan 27 08:05:13 crc kubenswrapper[4764]: I0127 08:05:13.433084 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7db57dc8bf-r9k82_c8b21737-d0e6-447a-b230-50e2aed06fd1/manager/0.log" Jan 27 08:05:13 crc kubenswrapper[4764]: I0127 08:05:13.505815 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-lcd48_87551348-318f-45f9-bea6-07750f5c0b7b/manager/0.log" Jan 27 08:05:13 crc kubenswrapper[4764]: I0127 08:05:13.723818 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6476466c7c-2hntl_01275567-9bc4-4728-98af-c399b3b386f3/manager/0.log" Jan 27 08:05:14 crc kubenswrapper[4764]: I0127 08:05:14.103715 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-76958f4d87-pt4l2_ce339e1f-d181-42e1-bb9a-d6401699560f/manager/0.log" Jan 27 08:05:15 crc kubenswrapper[4764]: I0127 08:05:15.391037 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75b8f798ff-nl98d_11dbcaab-8ae4-454f-bc9a-5082597154b2/manager/0.log" Jan 27 08:05:33 crc kubenswrapper[4764]: I0127 08:05:33.614799 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nknl2_faf1a2aa-14d3-4870-9886-3c0c989ed0e0/control-plane-machine-set-operator/0.log" Jan 27 08:05:33 crc kubenswrapper[4764]: I0127 08:05:33.770822 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4n9lw_773a03ba-4a88-45c8-99f2-3fcc582e31a0/kube-rbac-proxy/0.log" Jan 27 08:05:33 crc kubenswrapper[4764]: I0127 08:05:33.807954 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4n9lw_773a03ba-4a88-45c8-99f2-3fcc582e31a0/machine-api-operator/0.log" Jan 27 08:05:46 crc kubenswrapper[4764]: I0127 08:05:46.138706 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2j2dp_5da1d510-00c8-417b-9e20-8d85290affac/cert-manager-controller/0.log" Jan 27 08:05:46 crc kubenswrapper[4764]: I0127 08:05:46.303453 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5np2p_e2e23dd2-6c48-4820-9c25-530e99756477/cert-manager-cainjector/0.log" Jan 27 08:05:46 crc kubenswrapper[4764]: I0127 08:05:46.362500 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-554kw_928f9ba5-2664-4c52-8c0e-786eedf18952/cert-manager-webhook/0.log" Jan 27 08:06:00 crc kubenswrapper[4764]: I0127 08:06:00.368935 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nvhss_217f72e1-69f3-4204-9240-c04499e62f42/nmstate-console-plugin/0.log" Jan 27 08:06:00 crc kubenswrapper[4764]: I0127 08:06:00.598545 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sqp2c_d3b5797e-4e42-4b80-bb38-f9672697cc0b/nmstate-handler/0.log" Jan 27 08:06:00 crc kubenswrapper[4764]: I0127 08:06:00.612843 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rj8zb_f7a83740-7cd1-4527-a649-f1c90cf6b280/kube-rbac-proxy/0.log" Jan 27 08:06:00 crc kubenswrapper[4764]: I0127 08:06:00.704259 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rj8zb_f7a83740-7cd1-4527-a649-f1c90cf6b280/nmstate-metrics/0.log" Jan 27 08:06:00 crc kubenswrapper[4764]: I0127 08:06:00.793692 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dlghs_c5651ef7-a219-43f4-b015-d509be4e1e3f/nmstate-operator/0.log" Jan 27 08:06:00 crc kubenswrapper[4764]: I0127 08:06:00.912981 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-f8gcn_155fa58c-a112-4cf7-b994-65b5efd97dc6/nmstate-webhook/0.log" Jan 27 08:06:26 crc kubenswrapper[4764]: I0127 08:06:26.467036 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dz79t_3e26bcb4-4093-406d-a87c-05f3873ec3f7/kube-rbac-proxy/0.log" Jan 27 08:06:26 crc kubenswrapper[4764]: I0127 08:06:26.540749 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dz79t_3e26bcb4-4093-406d-a87c-05f3873ec3f7/controller/0.log" Jan 27 08:06:26 crc kubenswrapper[4764]: I0127 08:06:26.703046 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4ktfb_26672501-4cd3-4eb2-b893-46badfedbd56/frr-k8s-webhook-server/0.log" Jan 27 08:06:26 crc kubenswrapper[4764]: I0127 08:06:26.795260 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-frr-files/0.log" Jan 27 08:06:26 crc kubenswrapper[4764]: I0127 08:06:26.993830 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-metrics/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.000858 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-reloader/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 
08:06:27.031056 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-reloader/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.039690 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-frr-files/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.215682 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-reloader/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.235340 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-metrics/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.255230 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-frr-files/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.256751 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-metrics/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.427843 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-metrics/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.439282 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-reloader/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.442662 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/cp-frr-files/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.487059 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/controller/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.690988 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/kube-rbac-proxy/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.707844 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/frr-metrics/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.744959 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/kube-rbac-proxy-frr/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.945730 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/reloader/0.log" Jan 27 08:06:27 crc kubenswrapper[4764]: I0127 08:06:27.999640 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-748c4765c-2x4vk_01f5c524-c371-4487-a1cc-619a76dba209/manager/0.log" Jan 27 08:06:28 crc kubenswrapper[4764]: I0127 08:06:28.274765 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-78b8897d5b-w8jsn_6dcc9557-47dd-412a-931a-e51cae97b1eb/webhook-server/0.log" Jan 27 08:06:28 crc kubenswrapper[4764]: I0127 08:06:28.566590 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-slnqc_4e2f3c15-fd12-4b82-a070-776cce0272b1/kube-rbac-proxy/0.log" Jan 27 08:06:29 crc kubenswrapper[4764]: I0127 08:06:29.144456 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-slnqc_4e2f3c15-fd12-4b82-a070-776cce0272b1/speaker/0.log" Jan 27 08:06:29 crc kubenswrapper[4764]: I0127 08:06:29.324462 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wvhvc_2160d959-833f-4b0d-bfbf-07f8884bbe35/frr/0.log" Jan 27 08:06:40 crc kubenswrapper[4764]: I0127 08:06:40.986542 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr_a4266b92-4c6a-4651-8b75-a7e6479e5aff/util/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.202722 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr_a4266b92-4c6a-4651-8b75-a7e6479e5aff/util/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.231265 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr_a4266b92-4c6a-4651-8b75-a7e6479e5aff/pull/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.257362 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr_a4266b92-4c6a-4651-8b75-a7e6479e5aff/pull/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.419498 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr_a4266b92-4c6a-4651-8b75-a7e6479e5aff/util/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.444458 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr_a4266b92-4c6a-4651-8b75-a7e6479e5aff/extract/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.448338 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6qxfr_a4266b92-4c6a-4651-8b75-a7e6479e5aff/pull/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.620533 
4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr_c5dce277-e909-47e5-bbae-57b47e19613b/util/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.767976 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr_c5dce277-e909-47e5-bbae-57b47e19613b/pull/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.774912 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr_c5dce277-e909-47e5-bbae-57b47e19613b/util/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.789542 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr_c5dce277-e909-47e5-bbae-57b47e19613b/pull/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.956102 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr_c5dce277-e909-47e5-bbae-57b47e19613b/extract/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.995214 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr_c5dce277-e909-47e5-bbae-57b47e19613b/pull/0.log" Jan 27 08:06:41 crc kubenswrapper[4764]: I0127 08:06:41.998333 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137rgnr_c5dce277-e909-47e5-bbae-57b47e19613b/util/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.136315 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-l2lsf_2219dd21-2e26-4ed0-b937-d28596919965/extract-utilities/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.318058 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2lsf_2219dd21-2e26-4ed0-b937-d28596919965/extract-content/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.318057 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2lsf_2219dd21-2e26-4ed0-b937-d28596919965/extract-utilities/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.335628 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2lsf_2219dd21-2e26-4ed0-b937-d28596919965/extract-content/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.499018 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2lsf_2219dd21-2e26-4ed0-b937-d28596919965/extract-utilities/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.532791 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2lsf_2219dd21-2e26-4ed0-b937-d28596919965/extract-content/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.680607 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grw8j_13cf6436-1dfd-49fa-b548-5c2e5d746e81/extract-utilities/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.979826 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grw8j_13cf6436-1dfd-49fa-b548-5c2e5d746e81/extract-content/0.log" Jan 27 08:06:42 crc kubenswrapper[4764]: I0127 08:06:42.982752 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-grw8j_13cf6436-1dfd-49fa-b548-5c2e5d746e81/extract-content/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.013398 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grw8j_13cf6436-1dfd-49fa-b548-5c2e5d746e81/extract-utilities/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.076752 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2lsf_2219dd21-2e26-4ed0-b937-d28596919965/registry-server/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.214259 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grw8j_13cf6436-1dfd-49fa-b548-5c2e5d746e81/extract-utilities/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.238048 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grw8j_13cf6436-1dfd-49fa-b548-5c2e5d746e81/extract-content/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.547722 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-k6bmt_bad48f2b-ed0c-4320-b601-5851008a6ae3/marketplace-operator/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.571926 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgrxd_9f9dc94e-9e12-4bcf-8074-d996b8003e3a/extract-utilities/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.625920 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grw8j_13cf6436-1dfd-49fa-b548-5c2e5d746e81/registry-server/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.750071 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgrxd_9f9dc94e-9e12-4bcf-8074-d996b8003e3a/extract-content/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.765043 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgrxd_9f9dc94e-9e12-4bcf-8074-d996b8003e3a/extract-utilities/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.800677 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgrxd_9f9dc94e-9e12-4bcf-8074-d996b8003e3a/extract-content/0.log" Jan 27 08:06:43 crc kubenswrapper[4764]: I0127 08:06:43.999787 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgrxd_9f9dc94e-9e12-4bcf-8074-d996b8003e3a/extract-utilities/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.040720 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgrxd_9f9dc94e-9e12-4bcf-8074-d996b8003e3a/extract-content/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.082725 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgrxd_9f9dc94e-9e12-4bcf-8074-d996b8003e3a/registry-server/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.191283 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swd5n_c0516c8e-82e6-457c-97e7-503dbf7fb615/extract-utilities/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.375640 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swd5n_c0516c8e-82e6-457c-97e7-503dbf7fb615/extract-utilities/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.383987 4764 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-swd5n_c0516c8e-82e6-457c-97e7-503dbf7fb615/extract-content/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.394347 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swd5n_c0516c8e-82e6-457c-97e7-503dbf7fb615/extract-content/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.570423 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swd5n_c0516c8e-82e6-457c-97e7-503dbf7fb615/extract-content/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.613660 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swd5n_c0516c8e-82e6-457c-97e7-503dbf7fb615/extract-utilities/0.log" Jan 27 08:06:44 crc kubenswrapper[4764]: I0127 08:06:44.955381 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swd5n_c0516c8e-82e6-457c-97e7-503dbf7fb615/registry-server/0.log" Jan 27 08:07:23 crc kubenswrapper[4764]: I0127 08:07:23.762803 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:07:23 crc kubenswrapper[4764]: I0127 08:07:23.764400 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:07:53 crc kubenswrapper[4764]: I0127 08:07:53.762962 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:07:53 crc kubenswrapper[4764]: I0127 08:07:53.763452 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:08:23 crc kubenswrapper[4764]: I0127 08:08:23.762998 4764 patch_prober.go:28] interesting pod/machine-config-daemon-k8qgf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:08:23 crc kubenswrapper[4764]: I0127 08:08:23.763648 4764 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:08:23 crc kubenswrapper[4764]: I0127 08:08:23.763690 4764 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" Jan 27 08:08:23 crc kubenswrapper[4764]: I0127 08:08:23.764382 4764 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6"} pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:08:23 crc 
kubenswrapper[4764]: I0127 08:08:23.764426 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerName="machine-config-daemon" containerID="cri-o://d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" gracePeriod=600 Jan 27 08:08:23 crc kubenswrapper[4764]: E0127 08:08:23.901138 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:08:24 crc kubenswrapper[4764]: I0127 08:08:24.565846 4764 generic.go:334] "Generic (PLEG): container finished" podID="a061a513-f05f-4aa7-8310-5e418f3f747d" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" exitCode=0 Jan 27 08:08:24 crc kubenswrapper[4764]: I0127 08:08:24.565915 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" event={"ID":"a061a513-f05f-4aa7-8310-5e418f3f747d","Type":"ContainerDied","Data":"d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6"} Jan 27 08:08:24 crc kubenswrapper[4764]: I0127 08:08:24.566046 4764 scope.go:117] "RemoveContainer" containerID="6efa855611d4ec5bacd10b31fa54274b2941f6c776b9a9f82ca35598d6fc4ae7" Jan 27 08:08:24 crc kubenswrapper[4764]: I0127 08:08:24.566954 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:08:24 crc kubenswrapper[4764]: E0127 08:08:24.567300 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:08:31 crc kubenswrapper[4764]: I0127 08:08:31.650935 4764 generic.go:334] "Generic (PLEG): container finished" podID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerID="f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7" exitCode=0 Jan 27 08:08:31 crc kubenswrapper[4764]: I0127 08:08:31.651502 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" event={"ID":"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2","Type":"ContainerDied","Data":"f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7"} Jan 27 08:08:31 crc kubenswrapper[4764]: I0127 08:08:31.652826 4764 scope.go:117] "RemoveContainer" containerID="f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7" Jan 27 08:08:31 crc kubenswrapper[4764]: I0127 08:08:31.815767 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kxdq4_must-gather-ff6nd_bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2/gather/0.log" Jan 27 08:08:36 crc kubenswrapper[4764]: I0127 08:08:36.438986 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:08:36 crc kubenswrapper[4764]: E0127 08:08:36.439723 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:08:39 crc 
kubenswrapper[4764]: I0127 08:08:39.223659 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kxdq4/must-gather-ff6nd"] Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.224010 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kxdq4/must-gather-ff6nd"] Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.224203 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerName="copy" containerID="cri-o://95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea" gracePeriod=2 Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.685399 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kxdq4_must-gather-ff6nd_bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2/copy/0.log" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.686146 4764 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.715866 4764 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kxdq4_must-gather-ff6nd_bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2/copy/0.log" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.716173 4764 generic.go:334] "Generic (PLEG): container finished" podID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerID="95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea" exitCode=143 Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.716219 4764 scope.go:117] "RemoveContainer" containerID="95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.716360 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kxdq4/must-gather-ff6nd" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.741800 4764 scope.go:117] "RemoveContainer" containerID="f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.784169 4764 scope.go:117] "RemoveContainer" containerID="95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea" Jan 27 08:08:39 crc kubenswrapper[4764]: E0127 08:08:39.784620 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea\": container with ID starting with 95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea not found: ID does not exist" containerID="95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.784676 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea"} err="failed to get container status \"95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea\": rpc error: code = NotFound desc = could not find container \"95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea\": container with ID starting with 95b3f62d799ae5160b8d8fe6880a1d48e84834cf2778a7106ce8bfd10f58dfea not found: ID does not exist" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.784709 4764 scope.go:117] "RemoveContainer" containerID="f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7" Jan 27 08:08:39 crc kubenswrapper[4764]: E0127 08:08:39.784995 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7\": container with ID starting with 
f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7 not found: ID does not exist" containerID="f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.785020 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7"} err="failed to get container status \"f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7\": rpc error: code = NotFound desc = could not find container \"f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7\": container with ID starting with f20460c1e79b344b0174c962bff591d47c1d00ca32b638659a70d8b09e0503e7 not found: ID does not exist" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.869178 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-must-gather-output\") pod \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.869292 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56zsl\" (UniqueName: \"kubernetes.io/projected/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-kube-api-access-56zsl\") pod \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\" (UID: \"bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2\") " Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.882974 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-kube-api-access-56zsl" (OuterVolumeSpecName: "kube-api-access-56zsl") pod "bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" (UID: "bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2"). InnerVolumeSpecName "kube-api-access-56zsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:39 crc kubenswrapper[4764]: I0127 08:08:39.971592 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56zsl\" (UniqueName: \"kubernetes.io/projected/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-kube-api-access-56zsl\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:40 crc kubenswrapper[4764]: I0127 08:08:40.021462 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" (UID: "bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:40 crc kubenswrapper[4764]: I0127 08:08:40.077544 4764 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:40 crc kubenswrapper[4764]: I0127 08:08:40.448544 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" path="/var/lib/kubelet/pods/bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2/volumes" Jan 27 08:08:49 crc kubenswrapper[4764]: I0127 08:08:49.478460 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:08:49 crc kubenswrapper[4764]: E0127 08:08:49.479322 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 
08:09:03 crc kubenswrapper[4764]: I0127 08:09:03.438673 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:09:03 crc kubenswrapper[4764]: E0127 08:09:03.439414 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:09:16 crc kubenswrapper[4764]: I0127 08:09:16.439600 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:09:16 crc kubenswrapper[4764]: E0127 08:09:16.440510 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:09:28 crc kubenswrapper[4764]: I0127 08:09:28.447428 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:09:28 crc kubenswrapper[4764]: E0127 08:09:28.448381 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" 
podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:09:41 crc kubenswrapper[4764]: I0127 08:09:41.438582 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:09:41 crc kubenswrapper[4764]: E0127 08:09:41.439545 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:09:52 crc kubenswrapper[4764]: I0127 08:09:52.439252 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:09:52 crc kubenswrapper[4764]: E0127 08:09:52.439981 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:10:04 crc kubenswrapper[4764]: I0127 08:10:04.439032 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:10:04 crc kubenswrapper[4764]: E0127 08:10:04.440137 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:10:19 crc kubenswrapper[4764]: I0127 08:10:19.438679 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:10:19 crc kubenswrapper[4764]: E0127 08:10:19.439527 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:10:33 crc kubenswrapper[4764]: I0127 08:10:33.438231 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:10:33 crc kubenswrapper[4764]: E0127 08:10:33.439039 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.145278 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drl56"] Jan 27 08:10:43 crc kubenswrapper[4764]: E0127 08:10:43.148671 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerName="registry-server" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148708 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" 
containerName="registry-server" Jan 27 08:10:43 crc kubenswrapper[4764]: E0127 08:10:43.148729 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerName="gather" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148738 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerName="gather" Jan 27 08:10:43 crc kubenswrapper[4764]: E0127 08:10:43.148748 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerName="extract-utilities" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148757 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerName="extract-utilities" Jan 27 08:10:43 crc kubenswrapper[4764]: E0127 08:10:43.148776 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="extract-content" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148784 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="extract-content" Jan 27 08:10:43 crc kubenswrapper[4764]: E0127 08:10:43.148802 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerName="copy" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148809 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerName="copy" Jan 27 08:10:43 crc kubenswrapper[4764]: E0127 08:10:43.148820 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="registry-server" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148827 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="registry-server" Jan 27 08:10:43 
crc kubenswrapper[4764]: E0127 08:10:43.148843 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="extract-utilities" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148851 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="extract-utilities" Jan 27 08:10:43 crc kubenswrapper[4764]: E0127 08:10:43.148884 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerName="extract-content" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.148892 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerName="extract-content" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.149130 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerName="copy" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.149150 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="a973c560-12e4-4a4f-98a2-8c061979a1b0" containerName="registry-server" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.149175 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="2314a85c-a2eb-4a65-9ee7-81bfb1c9e3ae" containerName="registry-server" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.149189 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a688b-8bf9-4cf4-bd5f-2650078aa1b2" containerName="gather" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.150843 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.159860 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drl56"] Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.186791 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-utilities\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.186849 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhchg\" (UniqueName: \"kubernetes.io/projected/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-kube-api-access-zhchg\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.186920 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-catalog-content\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.288209 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-catalog-content\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.288507 4764 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-utilities\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.288548 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhchg\" (UniqueName: \"kubernetes.io/projected/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-kube-api-access-zhchg\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.288815 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-catalog-content\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.288878 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-utilities\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.308504 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhchg\" (UniqueName: \"kubernetes.io/projected/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-kube-api-access-zhchg\") pod \"redhat-operators-drl56\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.491408 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:43 crc kubenswrapper[4764]: I0127 08:10:43.952600 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drl56"] Jan 27 08:10:44 crc kubenswrapper[4764]: I0127 08:10:44.769108 4764 generic.go:334] "Generic (PLEG): container finished" podID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerID="44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af" exitCode=0 Jan 27 08:10:44 crc kubenswrapper[4764]: I0127 08:10:44.769173 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drl56" event={"ID":"feb3e064-54ff-49c4-a3d4-358d9ae6b46c","Type":"ContainerDied","Data":"44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af"} Jan 27 08:10:44 crc kubenswrapper[4764]: I0127 08:10:44.769400 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drl56" event={"ID":"feb3e064-54ff-49c4-a3d4-358d9ae6b46c","Type":"ContainerStarted","Data":"d0acf785bf1225ee8ed7b04a671f68a2f9e5f98dfb6f77ceee45152d05b29297"} Jan 27 08:10:44 crc kubenswrapper[4764]: I0127 08:10:44.771451 4764 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:10:45 crc kubenswrapper[4764]: I0127 08:10:45.796571 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drl56" event={"ID":"feb3e064-54ff-49c4-a3d4-358d9ae6b46c","Type":"ContainerStarted","Data":"ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a"} Jan 27 08:10:46 crc kubenswrapper[4764]: I0127 08:10:46.809529 4764 generic.go:334] "Generic (PLEG): container finished" podID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerID="ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a" exitCode=0 Jan 27 08:10:46 crc kubenswrapper[4764]: I0127 08:10:46.809778 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-drl56" event={"ID":"feb3e064-54ff-49c4-a3d4-358d9ae6b46c","Type":"ContainerDied","Data":"ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a"} Jan 27 08:10:47 crc kubenswrapper[4764]: I0127 08:10:47.438994 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:10:47 crc kubenswrapper[4764]: E0127 08:10:47.439360 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:10:47 crc kubenswrapper[4764]: I0127 08:10:47.839284 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drl56" event={"ID":"feb3e064-54ff-49c4-a3d4-358d9ae6b46c","Type":"ContainerStarted","Data":"1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55"} Jan 27 08:10:47 crc kubenswrapper[4764]: I0127 08:10:47.863143 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drl56" podStartSLOduration=2.374743404 podStartE2EDuration="4.86312168s" podCreationTimestamp="2026-01-27 08:10:43 +0000 UTC" firstStartedPulling="2026-01-27 08:10:44.77117845 +0000 UTC m=+3257.366800976" lastFinishedPulling="2026-01-27 08:10:47.259556736 +0000 UTC m=+3259.855179252" observedRunningTime="2026-01-27 08:10:47.855082961 +0000 UTC m=+3260.450705487" watchObservedRunningTime="2026-01-27 08:10:47.86312168 +0000 UTC m=+3260.458744206" Jan 27 08:10:53 crc kubenswrapper[4764]: I0127 08:10:53.492740 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:53 crc kubenswrapper[4764]: I0127 08:10:53.493533 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:53 crc kubenswrapper[4764]: I0127 08:10:53.579258 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:53 crc kubenswrapper[4764]: I0127 08:10:53.941613 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:53 crc kubenswrapper[4764]: I0127 08:10:53.988033 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drl56"] Jan 27 08:10:55 crc kubenswrapper[4764]: I0127 08:10:55.914162 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drl56" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="registry-server" containerID="cri-o://1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55" gracePeriod=2 Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.426632 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.540353 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhchg\" (UniqueName: \"kubernetes.io/projected/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-kube-api-access-zhchg\") pod \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.540411 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-catalog-content\") pod \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.540502 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-utilities\") pod \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\" (UID: \"feb3e064-54ff-49c4-a3d4-358d9ae6b46c\") " Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.542767 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-utilities" (OuterVolumeSpecName: "utilities") pod "feb3e064-54ff-49c4-a3d4-358d9ae6b46c" (UID: "feb3e064-54ff-49c4-a3d4-358d9ae6b46c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.546050 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-kube-api-access-zhchg" (OuterVolumeSpecName: "kube-api-access-zhchg") pod "feb3e064-54ff-49c4-a3d4-358d9ae6b46c" (UID: "feb3e064-54ff-49c4-a3d4-358d9ae6b46c"). InnerVolumeSpecName "kube-api-access-zhchg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.643075 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhchg\" (UniqueName: \"kubernetes.io/projected/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-kube-api-access-zhchg\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.643102 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.923774 4764 generic.go:334] "Generic (PLEG): container finished" podID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerID="1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55" exitCode=0 Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.923814 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drl56" event={"ID":"feb3e064-54ff-49c4-a3d4-358d9ae6b46c","Type":"ContainerDied","Data":"1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55"} Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.923842 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drl56" event={"ID":"feb3e064-54ff-49c4-a3d4-358d9ae6b46c","Type":"ContainerDied","Data":"d0acf785bf1225ee8ed7b04a671f68a2f9e5f98dfb6f77ceee45152d05b29297"} Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.923862 4764 scope.go:117] "RemoveContainer" containerID="1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.923856 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drl56" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.940868 4764 scope.go:117] "RemoveContainer" containerID="ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:56.963582 4764 scope.go:117] "RemoveContainer" containerID="44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.002119 4764 scope.go:117] "RemoveContainer" containerID="1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55" Jan 27 08:10:57 crc kubenswrapper[4764]: E0127 08:10:57.002579 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55\": container with ID starting with 1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55 not found: ID does not exist" containerID="1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.002612 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55"} err="failed to get container status \"1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55\": rpc error: code = NotFound desc = could not find container \"1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55\": container with ID starting with 1b37f1e07d865e3ed33285646780e0ebe20aabc0552ae5d9380e7ccb80e3db55 not found: ID does not exist" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.002638 4764 scope.go:117] "RemoveContainer" containerID="ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a" Jan 27 08:10:57 crc kubenswrapper[4764]: E0127 08:10:57.002928 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a\": container with ID starting with ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a not found: ID does not exist" containerID="ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.002954 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a"} err="failed to get container status \"ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a\": rpc error: code = NotFound desc = could not find container \"ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a\": container with ID starting with ba04ef97121a3e195d88526f60404a0cb8070606e457bd1ca915238a5da5ee9a not found: ID does not exist" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.002969 4764 scope.go:117] "RemoveContainer" containerID="44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af" Jan 27 08:10:57 crc kubenswrapper[4764]: E0127 08:10:57.003241 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af\": container with ID starting with 44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af not found: ID does not exist" containerID="44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.003292 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af"} err="failed to get container status \"44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af\": rpc error: code = NotFound desc = could not find container 
\"44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af\": container with ID starting with 44b1fac34b233ff623062c6212243ff4aa11cf5e26289dae78e21630247564af not found: ID does not exist" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.302460 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feb3e064-54ff-49c4-a3d4-358d9ae6b46c" (UID: "feb3e064-54ff-49c4-a3d4-358d9ae6b46c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.357762 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb3e064-54ff-49c4-a3d4-358d9ae6b46c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.569853 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drl56"] Jan 27 08:10:57 crc kubenswrapper[4764]: I0127 08:10:57.592789 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drl56"] Jan 27 08:10:58 crc kubenswrapper[4764]: I0127 08:10:58.451497 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" path="/var/lib/kubelet/pods/feb3e064-54ff-49c4-a3d4-358d9ae6b46c/volumes" Jan 27 08:11:02 crc kubenswrapper[4764]: I0127 08:11:02.439208 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:11:02 crc kubenswrapper[4764]: E0127 08:11:02.440138 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.001821 4764 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wgnk6"] Jan 27 08:11:07 crc kubenswrapper[4764]: E0127 08:11:07.002821 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="registry-server" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.002838 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="registry-server" Jan 27 08:11:07 crc kubenswrapper[4764]: E0127 08:11:07.002855 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="extract-utilities" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.002864 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="extract-utilities" Jan 27 08:11:07 crc kubenswrapper[4764]: E0127 08:11:07.002874 4764 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="extract-content" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.002881 4764 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="extract-content" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.003112 4764 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb3e064-54ff-49c4-a3d4-358d9ae6b46c" containerName="registry-server" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.005367 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.015990 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wgnk6"] Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.047651 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-utilities\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.047862 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42mn\" (UniqueName: \"kubernetes.io/projected/b8c17811-36ba-4502-a5fe-394382a3afe5-kube-api-access-v42mn\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.047889 4764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-catalog-content\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.149694 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-utilities\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.149856 4764 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v42mn\" (UniqueName: \"kubernetes.io/projected/b8c17811-36ba-4502-a5fe-394382a3afe5-kube-api-access-v42mn\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.149884 4764 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-catalog-content\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.150391 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-catalog-content\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.150621 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-utilities\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.181428 4764 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42mn\" (UniqueName: \"kubernetes.io/projected/b8c17811-36ba-4502-a5fe-394382a3afe5-kube-api-access-v42mn\") pod \"community-operators-wgnk6\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.328605 4764 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:07 crc kubenswrapper[4764]: I0127 08:11:07.842016 4764 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wgnk6"] Jan 27 08:11:08 crc kubenswrapper[4764]: I0127 08:11:08.022999 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgnk6" event={"ID":"b8c17811-36ba-4502-a5fe-394382a3afe5","Type":"ContainerStarted","Data":"900adc9d651605aef362ea9694e996b0299d4db089ab047be0d565f7672aca2b"} Jan 27 08:11:09 crc kubenswrapper[4764]: I0127 08:11:09.032664 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8c17811-36ba-4502-a5fe-394382a3afe5" containerID="140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202" exitCode=0 Jan 27 08:11:09 crc kubenswrapper[4764]: I0127 08:11:09.032709 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgnk6" event={"ID":"b8c17811-36ba-4502-a5fe-394382a3afe5","Type":"ContainerDied","Data":"140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202"} Jan 27 08:11:10 crc kubenswrapper[4764]: I0127 08:11:10.043494 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgnk6" event={"ID":"b8c17811-36ba-4502-a5fe-394382a3afe5","Type":"ContainerStarted","Data":"eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb"} Jan 27 08:11:11 crc kubenswrapper[4764]: I0127 08:11:11.056491 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8c17811-36ba-4502-a5fe-394382a3afe5" containerID="eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb" exitCode=0 Jan 27 08:11:11 crc kubenswrapper[4764]: I0127 08:11:11.056537 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgnk6" 
event={"ID":"b8c17811-36ba-4502-a5fe-394382a3afe5","Type":"ContainerDied","Data":"eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb"} Jan 27 08:11:12 crc kubenswrapper[4764]: I0127 08:11:12.066761 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgnk6" event={"ID":"b8c17811-36ba-4502-a5fe-394382a3afe5","Type":"ContainerStarted","Data":"ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f"} Jan 27 08:11:12 crc kubenswrapper[4764]: I0127 08:11:12.090010 4764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wgnk6" podStartSLOduration=3.369218594 podStartE2EDuration="6.089985542s" podCreationTimestamp="2026-01-27 08:11:06 +0000 UTC" firstStartedPulling="2026-01-27 08:11:09.034453624 +0000 UTC m=+3281.630076140" lastFinishedPulling="2026-01-27 08:11:11.755220562 +0000 UTC m=+3284.350843088" observedRunningTime="2026-01-27 08:11:12.082358074 +0000 UTC m=+3284.677980610" watchObservedRunningTime="2026-01-27 08:11:12.089985542 +0000 UTC m=+3284.685608068" Jan 27 08:11:16 crc kubenswrapper[4764]: I0127 08:11:16.438774 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:11:16 crc kubenswrapper[4764]: E0127 08:11:16.439854 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:11:17 crc kubenswrapper[4764]: I0127 08:11:17.329327 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:17 crc 
kubenswrapper[4764]: I0127 08:11:17.329392 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:17 crc kubenswrapper[4764]: I0127 08:11:17.373135 4764 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:18 crc kubenswrapper[4764]: I0127 08:11:18.162355 4764 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:18 crc kubenswrapper[4764]: I0127 08:11:18.215116 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wgnk6"] Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.137222 4764 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wgnk6" podUID="b8c17811-36ba-4502-a5fe-394382a3afe5" containerName="registry-server" containerID="cri-o://ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f" gracePeriod=2 Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.584620 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.610050 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-utilities\") pod \"b8c17811-36ba-4502-a5fe-394382a3afe5\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.610203 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v42mn\" (UniqueName: \"kubernetes.io/projected/b8c17811-36ba-4502-a5fe-394382a3afe5-kube-api-access-v42mn\") pod \"b8c17811-36ba-4502-a5fe-394382a3afe5\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.610582 4764 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-catalog-content\") pod \"b8c17811-36ba-4502-a5fe-394382a3afe5\" (UID: \"b8c17811-36ba-4502-a5fe-394382a3afe5\") " Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.611097 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-utilities" (OuterVolumeSpecName: "utilities") pod "b8c17811-36ba-4502-a5fe-394382a3afe5" (UID: "b8c17811-36ba-4502-a5fe-394382a3afe5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.611460 4764 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.617045 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c17811-36ba-4502-a5fe-394382a3afe5-kube-api-access-v42mn" (OuterVolumeSpecName: "kube-api-access-v42mn") pod "b8c17811-36ba-4502-a5fe-394382a3afe5" (UID: "b8c17811-36ba-4502-a5fe-394382a3afe5"). InnerVolumeSpecName "kube-api-access-v42mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.677902 4764 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8c17811-36ba-4502-a5fe-394382a3afe5" (UID: "b8c17811-36ba-4502-a5fe-394382a3afe5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.713681 4764 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v42mn\" (UniqueName: \"kubernetes.io/projected/b8c17811-36ba-4502-a5fe-394382a3afe5-kube-api-access-v42mn\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:20 crc kubenswrapper[4764]: I0127 08:11:20.713721 4764 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8c17811-36ba-4502-a5fe-394382a3afe5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.151961 4764 generic.go:334] "Generic (PLEG): container finished" podID="b8c17811-36ba-4502-a5fe-394382a3afe5" containerID="ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f" exitCode=0 Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.152020 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgnk6" event={"ID":"b8c17811-36ba-4502-a5fe-394382a3afe5","Type":"ContainerDied","Data":"ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f"} Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.152059 4764 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgnk6" event={"ID":"b8c17811-36ba-4502-a5fe-394382a3afe5","Type":"ContainerDied","Data":"900adc9d651605aef362ea9694e996b0299d4db089ab047be0d565f7672aca2b"} Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.152082 4764 scope.go:117] "RemoveContainer" containerID="ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.152991 4764 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wgnk6" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.179029 4764 scope.go:117] "RemoveContainer" containerID="eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.214282 4764 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wgnk6"] Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.221600 4764 scope.go:117] "RemoveContainer" containerID="140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.224686 4764 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wgnk6"] Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.254388 4764 scope.go:117] "RemoveContainer" containerID="ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f" Jan 27 08:11:21 crc kubenswrapper[4764]: E0127 08:11:21.254980 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f\": container with ID starting with ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f not found: ID does not exist" containerID="ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.255025 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f"} err="failed to get container status \"ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f\": rpc error: code = NotFound desc = could not find container \"ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f\": container with ID starting with ba8fe388fb5b53684257960591f05016d10982c0133a8022cc54dcffdcc8f58f not 
found: ID does not exist" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.255051 4764 scope.go:117] "RemoveContainer" containerID="eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb" Jan 27 08:11:21 crc kubenswrapper[4764]: E0127 08:11:21.255505 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb\": container with ID starting with eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb not found: ID does not exist" containerID="eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.255544 4764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb"} err="failed to get container status \"eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb\": rpc error: code = NotFound desc = could not find container \"eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb\": container with ID starting with eadcb2bc8d1fe403c5e24e7bf27bd1dd1fc19453e7158579de50f8216db787cb not found: ID does not exist" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.255573 4764 scope.go:117] "RemoveContainer" containerID="140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202" Jan 27 08:11:21 crc kubenswrapper[4764]: E0127 08:11:21.255912 4764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202\": container with ID starting with 140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202 not found: ID does not exist" containerID="140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202" Jan 27 08:11:21 crc kubenswrapper[4764]: I0127 08:11:21.255936 4764 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202"} err="failed to get container status \"140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202\": rpc error: code = NotFound desc = could not find container \"140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202\": container with ID starting with 140fb92ee69ece0888715c5db10caad8866094abc4e2e87c4eeb08075209e202 not found: ID does not exist" Jan 27 08:11:22 crc kubenswrapper[4764]: I0127 08:11:22.452644 4764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c17811-36ba-4502-a5fe-394382a3afe5" path="/var/lib/kubelet/pods/b8c17811-36ba-4502-a5fe-394382a3afe5/volumes" Jan 27 08:11:31 crc kubenswrapper[4764]: I0127 08:11:31.438917 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:11:31 crc kubenswrapper[4764]: E0127 08:11:31.439570 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:11:43 crc kubenswrapper[4764]: I0127 08:11:43.438075 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:11:43 crc kubenswrapper[4764]: E0127 08:11:43.439500 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:11:56 crc kubenswrapper[4764]: I0127 08:11:56.439098 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:11:56 crc kubenswrapper[4764]: E0127 08:11:56.440027 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d" Jan 27 08:12:07 crc kubenswrapper[4764]: I0127 08:12:07.438150 4764 scope.go:117] "RemoveContainer" containerID="d9be0899d8221f9f859996a787455719f7b71de1e7418b398692ec4f61882ef6" Jan 27 08:12:07 crc kubenswrapper[4764]: E0127 08:12:07.438866 4764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8qgf_openshift-machine-config-operator(a061a513-f05f-4aa7-8310-5e418f3f747d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8qgf" podUID="a061a513-f05f-4aa7-8310-5e418f3f747d"